
[clusteragent/autoscaling] Add local fallback fields to DatadogPodAutoscalerInternal #30776

Open
wants to merge 18 commits into base: main

Conversation

jennchenn
Member

@jennchenn jennchenn commented Nov 5, 2024

What does this PR do?

Add new fields to DatadogPodAutoscalerInternal to support local fallback. This PR adds the following to DatadogPodAutoscalerInternal:

  • customRecommenderConfiguration to track annotation configuring a custom recommender on a DatadogPodAutoscaler resource (autoscaling.datadoghq.com/custom-recommender)
  • mainScalingValues, of ScalingValues type to represent scaling values from RC
  • fallbackScalingValues, of ScalingValues type to represent scaling values from local fallback
  • scalingValues is now the active set of scaling values to use (either mainScalingValues or fallbackScalingValues)

and some helper functions. Existing functions that build/read from status are updated to account for local scaling values.

ScalingValues definitions are also moved out of rc_schema.go.

Configuration fields will likely need to be added later.

Motivation

First in a series of changes to support local fallback.

Describe how to test/QA your changes

Behavior should be unchanged for now.

Possible Drawbacks / Trade-offs

  • both sets of scaling values need to be stored at all times, and the active set is duplicated into scalingValues so we know which one is in use
  • customRecommenderConfiguration is only updated when scaling from DPA (i.e. not from UI)

Additional Notes

Note: currently scalingValues is the only set of values being used - this will need to be changed in subsequent PRs (storing values from RC in mainScalingValues and choosing the correct set for scalingValues).

Adding the new source type requires DataDog/datadog-operator#1513.

@github-actions github-actions bot added the short review PR is simple enough to be reviewed quickly label Nov 5, 2024
@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Nov 5, 2024

[Fast Unit Tests Report]

On pipeline 50566539 (CI Visibility). The following jobs did not run any unit tests:

Jobs:
  • tests_flavor_dogstatsd_deb-x64
  • tests_flavor_heroku_deb-x64
  • tests_flavor_iot_deb-x64

If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help.

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Nov 5, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR changes on a VM:

inv aws.create-vm --pipeline-id=50566539 --os-family=ubuntu

Note: This applies to commit 5dd8695


cit-pr-commenter bot commented Nov 5, 2024

Regression Detector

Regression Detector Results

Metrics dashboard
Target profiles
Run ID: 4f507e13-b5b7-4637-b1ec-cb8c256e47fe

Baseline: 88a8e04
Comparison: 5dd8695
Diff

Optimization Goals: ✅ No significant changes detected

Fine details of change detection per experiment

perf experiment goal Δ mean % Δ mean % CI trials links
tcp_syslog_to_blackhole ingress throughput +1.51 [+1.43, +1.59] 1 Logs
quality_gate_logs % cpu utilization +1.28 [-1.67, +4.23] 1 Logs
uds_dogstatsd_to_api_cpu % cpu utilization +0.50 [-0.23, +1.23] 1 Logs
file_tree memory utilization +0.26 [+0.12, +0.39] 1 Logs
file_to_blackhole_300ms_latency egress throughput +0.17 [-0.47, +0.80] 1 Logs
file_to_blackhole_0ms_latency egress throughput +0.09 [-0.75, +0.92] 1 Logs
file_to_blackhole_1000ms_latency egress throughput +0.07 [-0.70, +0.84] 1 Logs
file_to_blackhole_0ms_latency_http2 egress throughput +0.03 [-0.83, +0.89] 1 Logs
tcp_dd_logs_filter_exclude ingress throughput -0.00 [-0.01, +0.01] 1 Logs
uds_dogstatsd_to_api ingress throughput -0.01 [-0.11, +0.09] 1 Logs
file_to_blackhole_100ms_latency egress throughput -0.02 [-0.77, +0.74] 1 Logs
file_to_blackhole_0ms_latency_http1 egress throughput -0.03 [-0.80, +0.74] 1 Logs
file_to_blackhole_1000ms_latency_linear_load egress throughput -0.15 [-0.61, +0.30] 1 Logs
quality_gate_idle memory utilization -0.20 [-0.24, -0.16] 1 Logs bounds checks dashboard
file_to_blackhole_500ms_latency egress throughput -0.26 [-1.03, +0.51] 1 Logs
otel_to_otel_logs ingress throughput -0.93 [-1.58, -0.28] 1 Logs
quality_gate_idle_all_features memory utilization -3.13 [-3.25, -3.00] 1 Logs bounds checks dashboard

Bounds Checks: ❌ Failed

perf experiment bounds_check_name replicates_passed links
file_to_blackhole_100ms_latency lost_bytes 9/10
file_to_blackhole_0ms_latency lost_bytes 10/10
file_to_blackhole_0ms_latency memory_usage 10/10
file_to_blackhole_0ms_latency_http1 lost_bytes 10/10
file_to_blackhole_0ms_latency_http1 memory_usage 10/10
file_to_blackhole_0ms_latency_http2 lost_bytes 10/10
file_to_blackhole_0ms_latency_http2 memory_usage 10/10
file_to_blackhole_1000ms_latency memory_usage 10/10
file_to_blackhole_1000ms_latency_linear_load memory_usage 10/10
file_to_blackhole_100ms_latency memory_usage 10/10
file_to_blackhole_300ms_latency lost_bytes 10/10
file_to_blackhole_300ms_latency memory_usage 10/10
file_to_blackhole_500ms_latency lost_bytes 10/10
file_to_blackhole_500ms_latency memory_usage 10/10
quality_gate_idle memory_usage 10/10 bounds checks dashboard
quality_gate_idle_all_features memory_usage 10/10 bounds checks dashboard
quality_gate_logs lost_bytes 10/10
quality_gate_logs memory_usage 10/10

Explanation

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we flag a change in performance as a "regression" (a change worth investigating further) if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

CI Pass/Fail Decision

Passed. All Quality Gates passed.

  • quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
  • quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
  • quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.

@github-actions github-actions bot added medium review PR review might take time and removed short review PR is simple enough to be reviewed quickly labels Nov 5, 2024
@jennchenn jennchenn added changelog/no-changelog qa/done QA done before merge and regressions are covered by tests component/autoscaling labels Nov 5, 2024
@jennchenn jennchenn marked this pull request as ready for review November 5, 2024 21:26
@jennchenn jennchenn requested a review from a team as a code owner November 5, 2024 21:26
@jennchenn jennchenn requested a review from vboulineau November 7, 2024 15:04
// Annotations represents the relevant annotations on a DatadogPodAutoscaler object
type Annotations struct {
Contributor

I don't think the fact that the info comes from annotations is important, the info itself is important. I would see it as a RecommenderConfiguration which happens to be from annotations in DPA parsing only.

That could directly be a field in DPAInt customRecommenderConfiguration *recommenderConfiguration (in which case the file may not be necessary)

// Annotations represents the relevant annotations on a DatadogPodAutoscaler object
type Annotations struct {
Endpoint string
FallbackEndpoint string
Contributor

I don't think we want to support another endpoint for fallback (for now).


// exported for testing purposes
const (
AnnotationsURLKey = "autoscaling.datadoghq.com/url"
Contributor

As you already have JSON for the settings, it can be simplified into a single JSON object:

autoscaling.datadoghq.com/custom-recommender: {"url": <string>, "settings": {...} }

type Annotations struct {
Endpoint string
FallbackEndpoint string
Settings map[string]string
Contributor

Suggested change:
- Settings map[string]string
+ Settings map[string]any

I think we can keep the types (int, string, etc.) so that the recommender receives typed info. This is what the JSON/Proto allows: https://github.com/DataDog/agent-payload/blob/master/jsonschema/WorkloadRecommendationsRequest.json#L51-L74

}

// ParseAnnotations extracts the relevant autoscaling annotations from a kubernetes annotation map
func ParseAnnotations(annotations map[string]string) Annotations {
Contributor

I'm not sure the function needs to be exported. It should be called within the package and filled directly by the DatadogPodAutoscalerInternal operations (UpdateFromPodAutoscaler), so that we guarantee consistency of the object.

scalingValues ScalingValues

// mainScalingValues represents the scaling values retrieved from the product
Contributor

Suggested change:
- // mainScalingValues represents the scaling values retrieved from the product
+ // mainScalingValues represents the scaling values retrieved from the main recommender (product, optionally a custom endpoint).

// RemoveValues clears autoscaling values data from the PodAutoscalerInternal as we stopped autoscaling
func (p *PodAutoscalerInternal) RemoveValues() {
p.scalingValues = ScalingValues{}
}

// RemoveMainValues clears main autoscaling values data from the PodAutoscalerInternal as we stopped autoscaling
func (p *PodAutoscalerInternal) RemoveMainValues() {
Contributor

Is there any difference from UpdateFromMainValues(ScalingValues{})?

Member Author

Good point, no difference; it was mostly added to stay consistent with the RemoveValues() function above, but I can remove these two.

@@ -148,16 +159,36 @@ func (p *PodAutoscalerInternal) UpdateFromSettings(podAutoscalerSpec *datadoghq.
p.horizontalEventsRetention = getHorizontalEventsRetention(podAutoscalerSpec.Policy, longestScalingRulePeriodAllowed)
}

// UpdateFromValues updates the PodAutoscalerInternal from a new scaling values
// UpdateFromValues updates the PodAutoscalerInternal scaling values
Contributor

Ideally we would refine this API so that the controller does not have complete freedom 🤔

Member Author

Could this validation (ensuring that scalingValues is always either mainScalingValues or fallbackScalingValues) be done in one of the follow-up PRs that make use of fallbackScalingValues/mainScalingValues? There are no changes to the controller logic yet, so scalingValues is the only field being used at the moment.

@@ -320,11 +356,21 @@ func (p *PodAutoscalerInternal) CreationTimestamp() time.Time {
return p.creationTimestamp
}

// ScalingValues returns the scaling values of the PodAutoscaler
// ScalingValues returns a pointer to the active scaling values of the PodAutoscaler
Contributor

Not a pointer

@github-actions github-actions bot added long review PR is complex, plan time to review it and removed medium review PR review might take time labels Dec 4, 2024
@jennchenn jennchenn requested a review from vboulineau December 12, 2024 16:25
Labels
changelog/no-changelog component/autoscaling long review PR is complex, plan time to review it qa/done QA done before merge and regressions are covered by tests team/containers