[clusteragent/autoscaling] Add local fallback fields to DatadogPodAutoscalerInternal
#30776
base: main
Conversation
[Fast Unit Tests Report] On pipeline 50566539 (CI Visibility), the following jobs did not run any unit tests.
If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help.
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM: inv aws.create-vm --pipeline-id=50566539 --os-family=ubuntu
Note: This applies to commit 5dd8695
Regression Detector Results
Metrics dashboard
Baseline: 88a8e04
Optimization Goals: ✅ No significant changes detected
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | tcp_syslog_to_blackhole | ingress throughput | +1.51 | [+1.43, +1.59] | 1 | Logs |
➖ | quality_gate_logs | % cpu utilization | +1.28 | [-1.67, +4.23] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | +0.50 | [-0.23, +1.23] | 1 | Logs |
➖ | file_tree | memory utilization | +0.26 | [+0.12, +0.39] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | +0.17 | [-0.47, +0.80] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | +0.09 | [-0.75, +0.92] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.07 | [-0.70, +0.84] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | +0.03 | [-0.83, +0.89] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | -0.00 | [-0.01, +0.01] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.11, +0.09] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | -0.02 | [-0.77, +0.74] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | -0.03 | [-0.80, +0.74] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.15 | [-0.61, +0.30] | 1 | Logs |
➖ | quality_gate_idle | memory utilization | -0.20 | [-0.24, -0.16] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_500ms_latency | egress throughput | -0.26 | [-1.03, +0.51] | 1 | Logs |
➖ | otel_to_otel_logs | ingress throughput | -0.93 | [-1.58, -0.28] | 1 | Logs |
➖ | quality_gate_idle_all_features | memory utilization | -3.13 | [-3.25, -3.00] | 1 | Logs bounds checks dashboard |
Bounds Checks: ❌ Failed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
❌ | file_to_blackhole_100ms_latency | lost_bytes | 9/10 | |
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | lost_bytes | 10/10 | |
✅ | quality_gate_logs | memory_usage | 10/10 |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
// Annotations represents the relevant annotations on a DatadogPodAutoscaler object
type Annotations struct {
I don't think the fact that the info comes from annotations is important; the info itself is important. I would see it as a `RecommenderConfiguration`, which happens to come from annotations in DPA parsing only. That could directly be a field in DPAInt: `customRecommenderConfiguration *recommenderConfiguration` (in which case the file may not be necessary).
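A minimal sketch of what that could look like (only the two names quoted above come from the comment; the struct layout and field types are assumptions):

```go
// Hypothetical sketch: the parsed info is stored as plain configuration,
// independent of the fact that it comes from annotations today.
type recommenderConfiguration struct {
	Endpoint string
	Settings map[string]any
}

type PodAutoscalerInternal struct {
	// ... existing fields ...

	// customRecommenderConfiguration is filled from DatadogPodAutoscaler
	// annotations during DPA parsing, but stored as plain configuration.
	customRecommenderConfiguration *recommenderConfiguration
}
```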
// Annotations represents the relevant annotations on a DatadogPodAutoscaler object
type Annotations struct {
    Endpoint string
    FallbackEndpoint string
I don't think we want to support another endpoint for fallback (for now).
// exported for testing purposes
const (
    AnnotationsURLKey = "autoscaling.datadoghq.com/url"
As you already have a JSON for settings, it can be simplified as a single JSON object:
autoscaling.datadoghq.com/custom-recommender: {"url": <string>, "settings": {...} }
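A sketch of how the single-annotation format could be parsed; the annotation key and JSON shape come from the comment above, while the package, struct, and function names are hypothetical:

```go
package autoscaling // hypothetical package

import (
	"encoding/json"
	"fmt"
)

const customRecommenderAnnotationKey = "autoscaling.datadoghq.com/custom-recommender"

// customRecommenderAnnotation mirrors the suggested JSON payload.
type customRecommenderAnnotation struct {
	URL      string         `json:"url"`
	Settings map[string]any `json:"settings"`
}

// parseCustomRecommenderAnnotation returns nil when the annotation is absent.
func parseCustomRecommenderAnnotation(annotations map[string]string) (*customRecommenderAnnotation, error) {
	raw, ok := annotations[customRecommenderAnnotationKey]
	if !ok {
		return nil, nil
	}
	var conf customRecommenderAnnotation
	if err := json.Unmarshal([]byte(raw), &conf); err != nil {
		return nil, fmt.Errorf("invalid %s annotation: %w", customRecommenderAnnotationKey, err)
	}
	return &conf, nil
}
```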
type Annotations struct {
    Endpoint string
    FallbackEndpoint string
    Settings map[string]string
Suggested change:
-    Settings map[string]string
+    Settings map[string]any
I think we can keep the types (int, string, etc.) so that the recommender receives typed info. This is what the JSON/Proto allows: https://github.com/DataDog/agent-payload/blob/master/jsonschema/WorkloadRecommendationsRequest.json#L51-L74
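A quick illustration of the difference (the settings payload here is made up): with `map[string]any`, JSON numbers, booleans, and nested objects keep their types instead of all being flattened to strings.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	raw := `{"interval": 30, "dryRun": true, "profile": "aggressive"}`

	var settings map[string]any
	if err := json.Unmarshal([]byte(raw), &settings); err != nil {
		log.Fatal(err)
	}

	// JSON numbers decode as float64, booleans as bool, strings as string.
	fmt.Printf("%T %T %T\n", settings["interval"], settings["dryRun"], settings["profile"])
	// Output: float64 bool string
}
```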
}

// ParseAnnotations extracts the relevant autoscaling annotations from a kubernetes annotation map
func ParseAnnotations(annotations map[string]string) Annotations {
I'm not sure the function needs to be exported. It should be called within the package and filled directly by the `DatadogPodAutoscalerInternal` operations (`UpdateFromPodAutoscaler`), so that we guarantee consistency of the object.
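A rough sketch of the suggested shape (the `UpdateFromPodAutoscaler` name comes from the comment; everything else is assumed):

```go
// UpdateFromPodAutoscaler is the existing update path; parsing the annotations
// here keeps the internal object consistent with the DatadogPodAutoscaler object.
func (p *PodAutoscalerInternal) UpdateFromPodAutoscaler(podAutoscaler *datadoghq.DatadogPodAutoscaler) {
	// ... existing spec handling ...
	p.customRecommenderConfiguration = parseAnnotations(podAutoscaler.Annotations)
}

// parseAnnotations stays unexported: it is only called from within the package.
func parseAnnotations(annotations map[string]string) *recommenderConfiguration {
	// ... extract the custom recommender info from the annotation map ...
	return nil
}
```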
    scalingValues ScalingValues

// mainScalingValues represents the scaling values retrieved from the product
Suggested change:
-// mainScalingValues represents the scaling values retrieved from the product
+// mainScalingValues represents the scaling values retrieved from the main recommender (product, optionally a custom endpoint).
// RemoveValues clears autoscaling values data from the PodAutoscalerInternal as we stopped autoscaling
func (p *PodAutoscalerInternal) RemoveValues() {
    p.scalingValues = ScalingValues{}
}

// RemoveMainValues clears main autoscaling values data from the PodAutoscalerInternal as we stopped autoscaling
func (p *PodAutoscalerInternal) RemoveMainValues() {
Is there any difference with `UpdateFromMainValues(ScalingValues{})`?
Good point, no difference; it was mostly added to stay consistent with the `RemoveValues()` function above, but I can remove these two.
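If the helper is kept at all, it could simply delegate so the two stay trivially equivalent (sketch only, assuming the `UpdateFromMainValues` helper mentioned in the question above):

```go
// RemoveMainValues clears main autoscaling values; it is just a shorthand for
// passing an empty ScalingValues to the update path.
func (p *PodAutoscalerInternal) RemoveMainValues() {
	p.UpdateFromMainValues(ScalingValues{})
}
```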
@@ -148,16 +159,36 @@ func (p *PodAutoscalerInternal) UpdateFromSettings(podAutoscalerSpec *datadoghq.
    p.horizontalEventsRetention = getHorizontalEventsRetention(podAutoscalerSpec.Policy, longestScalingRulePeriodAllowed)
}

-// UpdateFromValues updates the PodAutoscalerInternal from a new scaling values
+// UpdateFromValues updates the PodAutoscalerInternal scaling values
Ideally we would need to refine this API so that the controller doesn't have complete freedom 🤔
Could this validation (ensuring that `scalingValues` is always either `mainScalingValues` or `fallbackScalingValues`) be done in one of the follow-up PRs that make use of `fallbackScalingValues`/`mainScalingValues`? There are no changes to the controller logic yet, so `scalingValues` is the only field being used at the moment.
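One possible shape for that follow-up validation (names other than the three fields discussed above are hypothetical):

```go
// scalingValuesSource identifies which set of values should be active.
type scalingValuesSource int

const (
	mainSource scalingValuesSource = iota
	fallbackSource
)

// setActiveScalingValues restricts scalingValues to be a copy of either
// mainScalingValues or fallbackScalingValues, never an arbitrary value.
func (p *PodAutoscalerInternal) setActiveScalingValues(source scalingValuesSource) {
	switch source {
	case mainSource:
		p.scalingValues = p.mainScalingValues
	case fallbackSource:
		p.scalingValues = p.fallbackScalingValues
	}
}
```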
@@ -320,11 +356,21 @@ func (p *PodAutoscalerInternal) CreationTimestamp() time.Time {
    return p.creationTimestamp
}

-// ScalingValues returns the scaling values of the PodAutoscaler
+// ScalingValues returns a pointer to the active scaling values of the PodAutoscaler
Not a pointer
What does this PR do?
Add new fields to `DatadogPodAutoscalerInternal` to support local fallback. This PR adds the following to `DatadogPodAutoscalerInternal`:
- `customRecommenderConfiguration` to track the annotation configuring a custom recommender on a `DatadogPodAutoscaler` resource (`autoscaling.datadoghq.com/custom-recommender`)
- `mainScalingValues`, of `ScalingValues` type, to represent scaling values from RC
- `fallbackScalingValues`, of `ScalingValues` type, to represent scaling values from local fallback
- `scalingValues` is now a representation of the active scaling values that should be used (either `mainScalingValues` or `fallbackScalingValues`)

and some helper functions; a sketch of the resulting fields follows below. Existing functions that build/read from status are updated to account for local scaling values. `ScalingValues` definitions are also moved out of `rc_schema.go`. Configuration fields will likely need to be added later.
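A rough sketch of the resulting internal struct as described above (field names from this PR's description; surrounding details are assumptions):

```go
type PodAutoscalerInternal struct {
	// ... existing fields ...

	// customRecommenderConfiguration tracks the
	// autoscaling.datadoghq.com/custom-recommender annotation on the DPA.
	customRecommenderConfiguration *recommenderConfiguration

	// mainScalingValues holds the scaling values received from RC.
	mainScalingValues ScalingValues

	// fallbackScalingValues holds the scaling values produced by the local fallback.
	fallbackScalingValues ScalingValues

	// scalingValues is the active set currently used,
	// either mainScalingValues or fallbackScalingValues.
	scalingValues ScalingValues
}
```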
Motivation
First of a series of changes to support local fallback.
Describe how to test/QA your changes
Behavior should be unchanged right now
Possible Drawbacks / Trade-offs
- `scalingValues` is kept so we know which one is active
- `customRecommenderConfiguration` is only updated when scaling from the `DPA` (i.e. not from the UI)

Additional Notes
Note: currently `scalingValues` is the only set of values being used; this will need to be changed in subsequent PRs (storing values from RC in `mainScalingValues` and choosing the correct set for `scalingValues`).
Adding a new source type requires DataDog/datadog-operator#1513