
[CONTINT-3524] Collect metrics of SBOM check (long running) in agent status #22313

Merged
merged 12 commits into main from ali/long-running-check on Feb 16, 2024

Conversation

AliDatadog
Contributor

@AliDatadog commented Jan 24, 2024

What does this PR do?

Adds the SBOM check to the output of agent status and implements a generic wrapper that can convert other checks as well (a rough sketch of the idea follows the example below). It implements the solution from the following RFC. Example:

    ----
      Instance ID: sbom [OK]
      Long Running Check: true
      Configuration Source: file:/etc/datadog-agent/conf.d/sbom.yaml
      Total Metric Samples: 0
      Total Events: 0
      container-sbom Total: 504
      Total Service Checks: 0
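
For a rough idea of what such a generic wrapper can look like, here is a minimal, self-contained Go sketch (all names are illustrative assumptions, not the actual PR code): the wrapped check's blocking Run is started once in a goroutine, and later scheduled runs only surface background errors, giving the collector a periodic point at which to gather stats.

    package main

    import (
    	"fmt"
    	"time"
    )

    // longRunningCheck is the minimal surface assumed in this sketch:
    // Run blocks for the lifetime of the check and works in the background.
    type longRunningCheck interface {
    	Name() string
    	Run() error
    }

    // wrapper adapts a long-running check to a scheduler that expects
    // short, periodic Run calls.
    type wrapper struct {
    	inner   longRunningCheck
    	started bool
    	errCh   chan error
    }

    // Interval is non-zero so the scheduler keeps invoking Run, which is
    // what allows check stats to be collected periodically.
    func (w *wrapper) Interval() time.Duration { return 15 * time.Second }

    // Run starts the wrapped check once, in the background; subsequent
    // calls only report a background failure, if any.
    func (w *wrapper) Run() error {
    	if !w.started {
    		w.started = true
    		w.errCh = make(chan error, 1)
    		go func() { w.errCh <- w.inner.Run() }()
    		return nil
    	}
    	select {
    	case err := <-w.errCh:
    		return err
    	default:
    		return nil
    	}
    }

    // fakeCheck stands in for a real long-running check such as sbom.
    type fakeCheck struct{}

    func (fakeCheck) Name() string { return "sbom" }
    func (fakeCheck) Run() error   { select {} } // never returns

    func main() {
    	w := &wrapper{inner: fakeCheck{}}
    	for i := 0; i < 3; i++ { // simulate three scheduler ticks
    		fmt.Println(w.inner.Name(), "tick", i, "err:", w.Run())
    	}
    }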

Motivation

We would like to monitor the status of long-running checks; the SBOM check is one of them.

Additional Notes

Adding a field LongRunning bool to the MetricsStat data structure was necessary for formatting. The Interval can't be 0: otherwise, stats wouldn't be collected and the check would be scheduled only once.
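
As a sketch only (the surrounding fields are assumptions for illustration; only the LongRunning field comes from this PR):

    package main

    import (
    	"fmt"
    	"time"
    )

    // Stats is an illustrative stand-in for the check stats structure.
    type Stats struct {
    	CheckName   string
    	Interval    time.Duration // must be non-zero, or the check is scheduled only once and stats are never collected
    	LongRunning bool          // lets the status templates render long-running checks differently
    	TotalEvents int64
    }

    func main() {
    	s := Stats{CheckName: "sbom", Interval: 15 * time.Second, LongRunning: true}
    	fmt.Printf("%+v\n", s)
    }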

Possible Drawbacks / Trade-offs

The long-running check will not go through the normal check loop.

Describe how to test/QA your changes

  1. First, verify that the SBOM check stats are now collected.
    Deploy the agent with SBOM collection enabled. Here is the setting in the Helm chart: https://github.com/DataDog/helm-charts/blob/716fc5c56344d64ddf9a1841e9d70d96a7a8fd94/charts/datadog/values.yaml#L707

Wait a few seconds and run agent status. It should output the SBOM check status. Look at the logs: every 15s, check metrics should be collected:

2024-01-24 19:57:26 UTC | CORE | INFO | (pkg/collector/worker/check_logger.go:40 in CheckStarted) | check:sbom | Running check...
2024-01-24 19:57:26 UTC | CORE | INFO | (pkg/collector/worker/check_logger.go:59 in CheckFinished) | check:sbom | Done running check
2024-01-24 19:57:41 UTC | CORE | INFO | (pkg/collector/worker/check_logger.go:40 in CheckStarted) | check:sbom | Running check...
2024-01-24 19:57:41 UTC | CORE | INFO | (pkg/collector/worker/check_logger.go:59 in CheckFinished) | check:sbom | Done running check
  2. Make sure agent status and agent launch-gui still work as expected.
    The templates were refactored, so we need to make sure that these two commands still work as expected.

Reviewer's Checklist

  • If known, an appropriate milestone has been selected; otherwise the Triage milestone is set.
  • Use the major_change label if your change has a major impact on the code base, impacts multiple teams, or changes important, well-established internals of the Agent. This label will be used during QA to make sure each team pays extra attention to the changed behavior. For any customer-facing change, use a releasenote.
  • A release note has been added or the changelog/no-changelog label has been applied.
  • Changed code has automated tests for its functionality.
  • Adequate QA/testing plan information is provided, except if the qa/skip-qa label is applied together with the required qa/done or qa/no-code-change label.
  • At least one team/.. label has been applied, indicating the team(s) that should QA this change.
  • If applicable, docs team has been notified or an issue has been opened on the documentation repo.
  • If applicable, the need-change/operator and need-change/helm labels have been applied.
  • If applicable, the k8s/<min-version> label has been applied, indicating the lowest Kubernetes version compatible with this feature.
  • If applicable, the config template has been updated.

@pr-commenter

pr-commenter bot commented Jan 24, 2024

Bloop Bleep... Dogbot Here

Regression Detector Results

Run ID: 28bec092-46ce-4e0d-97a3-aa84686c6ed3
Baseline: 7b0edbb
Comparison: 18d626f
Total CPUs: 7

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Experiments ignored for regressions

Regressions in experiments with settings containing erratic: true are ignored.

perf | experiment        | goal               | Δ mean % | Δ mean % CI
➖   | file_tree         | memory utilization | +1.01    | [+0.95, +1.08]
➖   | idle              | memory utilization | +0.66    | [+0.63, +0.69]
➖   | file_to_blackhole | % cpu utilization  | -0.83    | [-7.37, +5.71]

Fine details of change detection per experiment

perf | experiment                              | goal               | Δ mean % | Δ mean % CI
➖   | otel_to_otel_logs                       | ingress throughput | +2.40    | [+1.66, +3.15]
➖   | file_tree                               | memory utilization | +1.01    | [+0.95, +1.08]
➖   | idle                                    | memory utilization | +0.66    | [+0.63, +0.69]
➖   | process_agent_standard_check_with_stats | memory utilization | +0.32    | [+0.28, +0.36]
➖   | process_agent_real_time_mode            | memory utilization | +0.28    | [+0.25, +0.30]
➖   | process_agent_standard_check            | memory utilization | +0.14    | [+0.10, +0.18]
➖   | uds_dogstatsd_to_api                    | ingress throughput | +0.00    | [-0.04, +0.04]
➖   | tcp_dd_logs_filter_exclude              | ingress throughput | +0.00    | [-0.05, +0.05]
➖   | trace_agent_json                        | ingress throughput | -0.00    | [-0.04, +0.04]
➖   | trace_agent_msgpack                     | ingress throughput | -0.00    | [-0.03, +0.02]
➖   | uds_dogstatsd_to_api_cpu                | % cpu utilization  | -0.58    | [-2.03, +0.86]
➖   | tcp_syslog_to_blackhole                 | ingress throughput | -0.77    | [-0.82, -0.71]
➖   | file_to_blackhole                       | % cpu utilization  | -0.83    | [-7.37, +5.71]

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".
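
Expressed as code, the decision rule is a three-way conjunction. A sketch applying the thresholds stated above:

    package main

    import "fmt"

    // isRegression applies the three criteria above to one experiment:
    // an effect size of at least 5%, a 90% confidence interval that
    // excludes zero, and a configuration not marked erratic.
    func isRegression(deltaMeanPct, ciLow, ciHigh float64, erratic bool) bool {
    	bigEnough := deltaMeanPct >= 5.0 || deltaMeanPct <= -5.0
    	ciExcludesZero := ciLow > 0 || ciHigh < 0
    	return bigEnough && ciExcludesZero && !erratic
    }

    func main() {
    	// file_tree from the tables above: +1.01 [+0.95, +1.08].
    	// The CI excludes zero, but the effect size is under 5%, so
    	// it is not flagged as a regression.
    	fmt.Println(isRegression(1.01, 0.95, 1.08, false)) // false
    }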

@AliDatadog force-pushed the ali/long-running-check branch 2 times, most recently from 72ab34c to 720893b on January 24, 2024 20:10
@AliDatadog force-pushed the ali/long-running-check branch from 720893b to f37a8d8 on January 24, 2024 20:12
@AliDatadog marked this pull request as ready for review January 24, 2024 20:13
@AliDatadog requested review from a team as code owners January 24, 2024 20:13
@AliDatadog added this to the 7.52.0 milestone Jan 24, 2024
@AliDatadog force-pushed the ali/long-running-check branch from 2b50812 to 41b6613 on January 24, 2024 22:38
@AliDatadog force-pushed the ali/long-running-check branch from 41b6613 to fc0c15a on January 25, 2024 00:04
Contributor

@ogaca-dd left a comment


LGTM

@AliDatadog force-pushed the ali/long-running-check branch from adca09b to 9ae8f3f on January 25, 2024 12:37
Contributor

@clamoriniere left a comment


Member

@GustavoCaso left a comment


LGTM 🎉

Contributor

@vickenty left a comment


Would it be a problem if the SBOM check wasn't a long-running check? There are several core checks that do work in the background but are not long running. I don't see this discussed in the RFC.

pkg/collector/corechecks/longrunning.go
@clamoriniere
Contributor

Would it be a problem if the SBOM check wasn't a long-running check? There are several core checks that do work in the background but are not long running. I don't see this discussed in the RFC.

hi @vickenty

The name "long running" check comes from the fact that the run() function never ends. But indeed some of them are more "asynchronous" checks based on workload events. But we still want to benefit from the agent check configuration and scheduler capabilities.

For the SBOM check, we need to send the SBOM report as soon as the image is present on the host. That is why we implemented this check as a “long running check”. Migrating the SBOM check to a scheduled check doesn't make sense because it is not running the check that triggers the generation of the SBOM report payload.

So maybe we can replace “long running” with “asynchronous” for this kind of check, if that is clearer in terms of behavior for the end user.

WDYT?

@vickenty
Contributor

vickenty commented Feb 6, 2024

@clamoriniere

Migrating the SBOM check to a scheduled check doesn't make sense because it is not running the check that triggers the generation of the SBOM report payload.

A regular check can very well have background tasks driven by external events; see windows_event_log for a recent example. The fact that Run is called periodically doesn't oblige you to drive the logic from it, or even to have any code in it at all (although it is still nice to commit the sender from it, so everything gets flushed at predictable and configurable intervals, the same as other checks). Is there anything else you can't do with a regular check that this PR would enable you to do?
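
A minimal sketch of the pattern described here (illustrative names only; sender is a stand-in for the agent's sender interface): the real work is driven by external events in a background goroutine, and the periodic Run only commits, so everything is flushed at the configured interval.

    package main

    import (
    	"fmt"
    	"time"
    )

    // sender is a stand-in: events are buffered as they arrive and
    // flushed on Commit.
    type sender struct{ buf chan string }

    func (s *sender) Event(e string) { s.buf <- e }

    func (s *sender) Commit() {
    	for {
    		select {
    		case e := <-s.buf:
    			fmt.Println("flush:", e)
    		default:
    			return
    		}
    	}
    }

    // check does its real work in a background goroutine driven by
    // external events; the scheduled Run only commits the sender.
    type check struct{ s *sender }

    func (c *check) start(events <-chan string) {
    	go func() {
    		for e := range events {
    			c.s.Event(e) // event-driven work, independent of Run
    		}
    	}()
    }

    func (c *check) Run() error {
    	c.s.Commit() // flush at a predictable, configurable interval
    	return nil
    }

    func main() {
    	events := make(chan string)
    	c := &check{s: &sender{buf: make(chan string, 16)}}
    	c.start(events)
    	events <- "sbom generated for image nginx:latest"
    	time.Sleep(10 * time.Millisecond) // let the goroutine buffer it
    	_ = c.Run()                       // scheduled run: only commits
    }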

@AliDatadog
Contributor Author

A regular check can very well have background tasks driven by external events; see windows_event_log for a recent example. The fact that Run is called periodically doesn't oblige you to drive the logic from it, or even to have any code in it at all (although it is still nice to commit the sender from it, so everything gets flushed at predictable and configurable intervals, the same as other checks). Is there anything else you can't do with a regular check that this PR would enable you to do?

This PR introduces a reusable template that allows us to easily port our long-running checks to normal checks. I think it corresponds well to what you describe, as the main logic will run in the background.

Besides, IMO the specification requires clarification. The fact that the check can start doing work after Configure is called, rather than Run, is not a perfect fit for the Check interface. To be honest, I found it a bit confusing.

Rather than migrating tons of checks to more or less respect the current interface, I think that defining a common standard for our checks is the easiest path and will benefit long-term support. WDYT?

Service Checks: Last Run: {{humanize .ServiceChecks}}, Total: {{humanize .TotalServiceChecks}}
{{- if .TotalHistogramBuckets}}
Histogram Buckets: Last Run: {{humanize .HistogramBuckets}}, Total: {{humanize .TotalHistogramBuckets}}
{{ if .LongRunning -}}
Contributor


Would it be possible to avoid duplicating the template, and only put the execution statistics behind a condition on .LongRunning? (And the same for the three other templates.)
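
For illustration, the kind of conditional being suggested, as a runnable Go sketch (the stats struct and the AvgExecutionTime field are stand-ins; only .LongRunning comes from this PR):

    package main

    import (
    	"os"
    	"text/template"
    )

    // One template instead of a duplicated one: the execution-statistics
    // section is gated on .LongRunning.
    const checkStatsTmpl = `Total Metric Samples: {{.TotalMetricSamples}}
    Total Events: {{.TotalEvents}}
    {{- if not .LongRunning}}
    Average Execution Time: {{.AvgExecutionTime}}ms
    {{- end}}
    `

    type stats struct {
    	TotalMetricSamples int
    	TotalEvents        int
    	AvgExecutionTime   int
    	LongRunning        bool
    }

    func main() {
    	t := template.Must(template.New("checkStats").Parse(checkStatsTmpl))
    	// A long-running check: the execution time line is omitted.
    	_ = t.Execute(os.Stdout, stats{LongRunning: true})
    }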

Contributor Author


IMO it is clearer like this; also, it conflicts with a previous comment: #22313 (comment).

pkg/status/render/fixtures/check_stats.text
@vickenty vickenty dismissed their stale review February 15, 2024 14:44

outdated

Member

@GustavoCaso left a comment


LGTM

@AliDatadog
Contributor Author

/merge

@dd-devflow

dd-devflow bot commented Feb 16, 2024

🚂 MergeQueue

This merge request is not mergeable yet, because of pending checks/missing approvals. It will be added to the queue as soon as checks pass and/or get approvals.
Note: if you pushed new commits since the last approval, you may need additional approval.
You can remove it from the waiting list with /remove command.

Use /merge -c to cancel this operation!

@dd-devflow

dd-devflow bot commented Feb 16, 2024

⚠️ MergeQueue

This merge request was unqueued

If you need support, contact us on Slack #ci-interfaces!

@clamoriniere
Contributor

/merge

@dd-devflow

dd-devflow bot commented Feb 16, 2024

🚂 MergeQueue

Pull request added to the queue.

There are 6 builds ahead! (estimated merge in less than 2h)

Use /merge -c to cancel this operation!

@dd-mergequeue bot merged commit de6cec8 into main Feb 16, 2024
161 of 164 checks passed
@dd-mergequeue bot deleted the ali/long-running-check branch February 16, 2024 20:47