
[clusteragent/autoscaling] Impose 100 DatadogPodAutoscaler limit in cluster agent #28684

Merged
merged 33 commits into main from jenn/CASCL-1_autoscaling-impose-100-crd-limit
Oct 4, 2024

Conversation

jennchenn
Member

@jennchenn jennchenn commented Aug 22, 2024

What does this PR do?

This PR introduces a max heap to keep track of DatadogPodAutoscaler objects that have been seen by the cluster agent, ordered by timestamp. When more than 100 DatadogPodAutoscaler objects are created, we set an error status to let the user know they have exceeded this limit.
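For context, a minimal sketch of the idea, built on Go's container/heap (an illustration, not the actual implementation; the names, the constructor, and the eviction details are assumptions):

```go
package autoscaling

import (
	"container/heap"
	"time"
)

// TimestampKey pairs a store key with the object's creation timestamp.
type TimestampKey struct {
	Timestamp time.Time
	Key       string
}

// maxTimestampHeap keeps the newest entry at the root so it can be evicted first.
type maxTimestampHeap []TimestampKey

func (h maxTimestampHeap) Len() int           { return len(h) }
func (h maxTimestampHeap) Less(i, j int) bool { return h[i].Timestamp.After(h[j].Timestamp) }
func (h maxTimestampHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *maxTimestampHeap) Push(x any)        { *h = append(*h, x.(TimestampKey)) }
func (h *maxTimestampHeap) Pop() any {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// HashHeap tracks at most maxSize keys, preferring the oldest creation timestamps,
// with a key set for O(1) membership checks.
type HashHeap struct {
	heap    maxTimestampHeap
	Keys    map[string]bool
	maxSize int
}

// NewHashHeap returns an empty HashHeap bounded to maxSize entries.
func NewHashHeap(maxSize int) *HashHeap {
	return &HashHeap{Keys: make(map[string]bool), maxSize: maxSize}
}

// InsertIntoHeap returns true if the key is already tracked or was inserted.
func (h *HashHeap) InsertIntoHeap(k TimestampKey) bool {
	if h.Keys[k.Key] {
		return true
	}
	if len(h.heap) >= h.maxSize {
		// Heap is full: only accept keys created before the newest tracked entry,
		// evicting that entry so the oldest objects keep their slots.
		if !k.Timestamp.Before(h.heap[0].Timestamp) {
			return false
		}
		evicted := heap.Pop(&h.heap).(TimestampKey)
		delete(h.Keys, evicted.Key)
	}
	heap.Push(&h.heap, k)
	h.Keys[k.Key] = true
	return true
}
```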

Motivation

We only ingest the first 100 created DatadogPodAutoscaler objects. To make it clearer for the user why they are not seeing updates to their autoscaler object, this update sets an error status on the object.

Additional Notes

This limit is enforced in the backend already; this change just makes it clearer to the user why a DPA is not producing any scaling actions.

Possible Drawbacks / Trade-offs

Objects are added to the max heap when there is a store update. To check whether a DPA is valid, we check that it exists in the max heap. There can be a delay in the error status update when:

  • A new remote DatadogPodAutoscaler is created (it won't be added to the heap until its creation timestamp is updated)
  • We have 100 DPAs and one is deleted: the newly valid DPA will still report an error state until it is processed again

Describe how to test/QA your changes

Local

  1. Create enough DPAs to hit the limit (default 100)
  2. Create a new DPA; check that it gets updated with an error status: Autoscaler disabled as maximum number per cluster reached (100)
  3. Delete one of the original 100 DPAs
  4. Check that the new DPA gets updated to not be in error state

Remote
Repeat above steps, but create DPAs through the UI instead.

@agent-platform-auto-pr
Contributor

agent-platform-auto-pr bot commented Aug 22, 2024

[Fast Unit Tests Report]

On pipeline 45778940 (CI Visibility). The following jobs did not run any unit tests:

Jobs:
  • tests_flavor_dogstatsd_deb-x64
  • tests_flavor_heroku_deb-x64
  • tests_flavor_iot_deb-x64

If you modified Go files and expected unit tests to run in these jobs, please double-check the job logs. If you think tests should have been executed, reach out to #agent-devx-help.

@pr-commenter

pr-commenter bot commented Aug 22, 2024

Regression Detector

Regression Detector Results

Run ID: eb9049cb-460b-4911-80fe-0ced12c57bfe Metrics dashboard Target profiles

Baseline: 0123bf2
Comparison: 4a317d5

Performance changes are noted in the perf column of each table:

  • ✅ = significantly better comparison variant performance
  • ❌ = significantly worse comparison variant performance
  • ➖ = no significant change in performance

No significant changes in experiment optimization goals

Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.

Fine details of change detection per experiment

| perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
| --- | --- | --- | --- | --- | --- | --- |
| ➖ | idle_all_features | memory utilization | +0.50 | [+0.43, +0.56] | 1 | Logs |
| ➖ | otel_to_otel_logs | ingress throughput | +0.49 | [-0.32, +1.31] | 1 | Logs |
| ➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.10, +0.08] | 1 | Logs |
| ➖ | tcp_syslog_to_blackhole | ingress throughput | -0.29 | [-0.34, -0.24] | 1 | Logs |
| ➖ | file_tree | memory utilization | -0.30 | [-0.38, -0.21] | 1 | Logs |
| ➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.53 | [-1.27, +0.20] | 1 | Logs |
| ➖ | idle | memory utilization | -0.98 | [-1.02, -0.94] | 1 | Logs |
| ➖ | basic_py_check | % cpu utilization | -1.02 | [-3.70, +1.65] | 1 | Logs |
| ➖ | pycheck_lots_of_tags | % cpu utilization | -1.24 | [-3.69, +1.21] | 1 | Logs |

Bounds Checks

| experiment | bounds_check_name | replicates_passed |
| --- | --- | --- |
| idle | memory_usage | 10/10 |

Explanation

A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".

For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:

  1. Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.

  2. Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.

  3. Its configuration does not mark it "erratic".

@pr-commenter

pr-commenter bot commented Aug 23, 2024

Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM:

inv create-vm --pipeline-id=45778940 --os-family=ubuntu

Note: This applies to commit 4a317d5

@jennchenn jennchenn force-pushed the jenn/CASCL-1_autoscaling-impose-100-crd-limit branch from 9ed5f26 to 843d36c on August 26, 2024 14:11
Client dynamic.Interface
Lister cache.GenericLister
Workqueue workqueue.RateLimitingInterface
AutoscalerHeap *HashHeap
Contributor

That's not something generic to any controller, so it should not be part of the generic Controller. Generally speaking, it's an internal part of the controller, and there's probably no reason to take it as a creation parameter.

c.store.UnlockSet(key, model.NewPodAutoscalerInternal(podAutoscaler), c.ID)
podAutoscalerInternal := model.NewPodAutoscalerInternal(podAutoscaler)
c.store.UnlockSet(key, podAutoscalerInternal, c.ID)
c.AutoscalerHeap.InsertIntoHeap(autoscaling.TimestampKey{
Contributor

Instead of having to track changes to the Store, could it be registered as an observer?
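For illustration, one shape this could take, assuming the store exposes some observer registration (the Observer type, RegisterObserver, and the callback signatures below are hypothetical):

```go
// Hypothetical sketch: the heap reacts to store events instead of the controller
// pushing updates into it manually.
store.RegisterObserver(autoscaling.Observer{
	SetFunc: func(key, sender string) {
		// The creation timestamp would have to be read back from the store,
		// under its lock, before inserting the key into the heap.
	},
	DeleteFunc: func(key, sender string) {
		hashHeap.DeleteFromHeap(key)
	},
})
```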

}

// HashHeap is a struct that holds a MaxHeap and a set of keys that exist in the heap
type HashHeap struct {
Contributor

HashHeap is currently not thread-safe; thread safety could become necessary if we have multiple workers or if we move the store to object locking.
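A minimal sketch of one way to address that, building on the HashHeap fields sketched earlier (the mutex placement is an assumption):

```go
import "sync"

// Guard the heap and key set with a mutex so multiple workers can use it safely.
type HashHeap struct {
	mu      sync.Mutex
	heap    maxTimestampHeap
	Keys    map[string]bool
	maxSize int
}

// Exists reports whether key is currently tracked; InsertIntoHeap and
// DeleteFromHeap would take the same lock around their existing logic.
func (h *HashHeap) Exists(key string) bool {
	h.mu.Lock()
	defer h.mu.Unlock()
	return h.Keys[key]
}
```

Note that direct reads of Keys (as the tests do) would then also need to go through a locked accessor.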

var err error
if !isAdded {
// Number of DatadogPodAutoscaler objects exceeds the limit
podAutoscalerInternal.SetError(fmt.Errorf("Too many DatadogPodAutoscaler objects created, ignoring this one %s", key))
Contributor

I think we need to be a bit more explicit. You also don't need to put the key in the error message as it's already attached to the object itself.
Something like Autoscaler disabled as maximum number per cluster reached (100).

podAutoscalerInternal.SetError(fmt.Errorf("Too many DatadogPodAutoscaler objects created, ignoring this one %s", key))
} else {
// Now that everything is synced, we can perform the actual processing
result, err = c.handleScaling(ctx, podAutoscaler, &podAutoscalerInternal)
Contributor

It'd be nice to not have the main logic in an else, though it would require changing the code structure a bit.

Member Author

Makes sense, I can wait for the changes from #28723

func (c *Controller) updateAutoscalerStatusAndUnlock(ctx context.Context, key, ns, name string, err error, podAutoscalerInternal model.PodAutoscalerInternal, podAutoscaler *datadoghq.DatadogPodAutoscaler) error {
// Update status based on latest state
statusErr := c.updatePodAutoscalerStatus(ctx, podAutoscalerInternal, podAutoscaler)
if statusErr != nil {
log.Errorf("Failed to update status for PodAutoscaler: %s/%s, err: %v", ns, name, statusErr)
// We want to return the status error if none to count in the requeue retries.
if err == nil {
err = statusErr
}
}
c.store.UnlockSet(key, podAutoscalerInternal, c.ID)
return err
}

to be merged so I can make a status update and return early here
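Roughly, the early return could then look like this (sketch only; autoscaling.NoRequeue and the exact error wiring are placeholders):

```go
if !isAdded {
	// Too many DatadogPodAutoscaler objects: surface the error status, unlock,
	// and stop here instead of nesting the main processing in an else branch.
	limitErr := fmt.Errorf("Autoscaler disabled as maximum number per cluster reached (%d)", maxDatadogPodAutoscalerObjects)
	podAutoscalerInternal.SetError(limitErr)
	return autoscaling.NoRequeue, c.updateAutoscalerStatusAndUnlock(ctx, key, ns, name, limitErr, podAutoscalerInternal, podAutoscaler)
}

// Main processing continues un-nested.
result, err := c.handleScaling(ctx, podAutoscaler, &podAutoscalerInternal)
```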

}

// InsertIntoHeap returns true if the key already exists in the max heap or was inserted correctly
func (h *HashHeap) InsertIntoHeap(k TimestampKey) bool {
Contributor

What happens when you pass a zero time value? I believe it should not be inserted.
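One possible reading of that suggestion, as a sketch:

```go
// InsertIntoHeap returns true if the key already exists in the max heap or was inserted correctly.
func (h *HashHeap) InsertIntoHeap(k TimestampKey) bool {
	// Reject zero timestamps: remote-owned objects start with an empty
	// CreationTimestamp and should not occupy a slot until it is known.
	if k.Timestamp.IsZero() {
		return false
	}
	return h.insert(k) // existing insertion logic, factored out here for the sketch
}
```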

dpa1, dpaTyped1 := newFakePodAutoscaler("default", "dpa-1", 1, time.Now(), dpaSpec, datadoghq.DatadogPodAutoscalerStatus{})
dpa2, dpaTyped2 := newFakePodAutoscaler("default", "dpa-2", 1, time.Now().Add(1*time.Hour), dpaSpec, datadoghq.DatadogPodAutoscalerStatus{})

f.InformerObjects = append(f.InformerObjects, dpa)
Contributor

You can add multiple objects with a single append:

f.InformerObjects = append(f.InformerObjects, dpa, dpa1)
f.Objects = append(f.Objects, dpaTyped, dpaTyped1)

assert.Truef(t, f.autoscalingHeap.Keys["default/dpa-1"], "Expected dpa-1 to be in heap")
assert.Truef(t, f.autoscalingHeap.Keys["default/dpa-2"], "Expected dpa-2 to be in heap")

f.RunControllerSync(true, "default/dpa-0")
Contributor

The test is named CreateDelete, but I don't see a test for the delete part? For instance, removing dpa-1 should make dpa-2 eligible.

APIVersion: "apps/v1",
},
// Local owner means .Spec source of truth is K8S
Owner: datadoghq.DatadogPodAutoscalerLocalOwner,
Contributor

You also need a test for the remote owner, which should start with a zero CreationTimestamp, then be created in Kubernetes, and then get updated with the Kubernetes creation timestamp.

@@ -214,6 +221,7 @@ func (c *Controller) syncPodAutoscaler(ctx context.Context, key, ns, name string
if err != nil && errors.IsNotFound(err) {
log.Debugf("Object %s not found in Kubernetes during deletion, clearing internal store", key)
c.store.UnlockDelete(key, c.ID)
c.AutoscalerHeap.DeleteFromHeap(key)
Contributor

There's code missing to handle updating the internal PodAutoscaler with the creation timestamp from Kubernetes once the object has been created there.

@jennchenn jennchenn marked this pull request as ready for review September 26, 2024 15:08
@jennchenn jennchenn requested a review from a team as a code owner September 26, 2024 15:08
@@ -177,7 +185,8 @@ func (c *Controller) syncPodAutoscaler(ctx context.Context, key, ns, name string
if podAutoscaler != nil {
// If we don't have an instance locally, we create it. Deletion is handled through setting the `Deleted` flag
log.Debugf("Creating internal PodAutoscaler: %s from Kubernetes object", key)
c.store.UnlockSet(key, model.NewPodAutoscalerInternal(podAutoscaler), c.ID)
podAutoscalerInternal := model.NewPodAutoscalerInternal(podAutoscaler)
Contributor

nit: Is this change necessary?

@@ -64,6 +64,8 @@ type Controller struct {
verticalController *verticalController

localSender sender.Sender

hashHeap *autoscaling.HashHeap
Contributor

nit: The variable name should match its usage, probably something like limitHeap or countHeap.
Also, the variable is more closely tied to the store and should perhaps go in the block just below store.

}
}

// It's a very simple implementation of a notify process, but it's enough in our case as we aim at only 1 or 2 observers
func (s *Store[T]) notify(operationType storeOperation, key, sender string) {
// TODO: if we want to subscribe on set, should we pass the object as well?
func (s *Store[T]) notify(operationType storeOperation, key, sender string, obj any) {
Contributor

It's not possible to pass obj in because it means you could modify an object silently, without holding the lock.

c.store.Unlock(key)
return autoscaling.Requeue, err
}

podAutoscalerInternal.UpdateCreationTimestamp(podAutoscaler.CreationTimestamp.Time)
Contributor

I suspect you could put that inside if podAutoscalerInternal.Generation() != podAutoscaler.Generation {, as we're only interested in getting the creation timestamp once (it won't be exactly once, but only when there's a .Spec change, which should not happen frequently).
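A sketch of that placement (surrounding code abbreviated):

```go
// Only refresh metadata derived from the Kubernetes object when its generation
// changed, i.e. when the .Spec was modified.
if podAutoscalerInternal.Generation() != podAutoscaler.Generation {
	podAutoscalerInternal.UpdateCreationTimestamp(podAutoscaler.CreationTimestamp.Time)
	// ... existing spec/generation handling ...
	podAutoscalerInternal.SetGeneration(podAutoscaler.Generation)
}
```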

if !c.hashHeap.Exists(key) {
return fmt.Errorf("Autoscaler disabled as maximum number per cluster reached (%d)", maxDatadogPodAutoscalerObjects)
}

// Check that targetRef is not set to the cluster agent
clusterAgentPodName, err := common.GetSelfPodName()
Contributor

Thinking about this (not exactly related to this PR), I don't think we want to error all DPAs if we cannot get the self pod name; we should probably just skip.
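A sketch of that behavior (the log message and placement are illustrative only):

```go
// Check that targetRef is not set to the cluster agent; if we cannot resolve
// our own pod name, skip this particular check instead of erroring every DPA.
clusterAgentPodName, err := common.GetSelfPodName()
if err != nil {
	log.Debugf("Unable to get cluster agent pod name, skipping self-target check: %v", err)
	return nil
}
```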

@@ -387,7 +398,12 @@ func (c *Controller) deletePodAutoscaler(ns, name string) error {
return nil
}

func (c *Controller) validateAutoscaler(podAutoscaler *datadoghq.DatadogPodAutoscaler) error {
func (c *Controller) validateAutoscaler(key string, podAutoscaler *datadoghq.DatadogPodAutoscaler) error {
Contributor

podAutoscalerInternal should have everything you need in a single object.

@@ -349,3 +357,138 @@ func TestDatadogPodAutoscalerTargetingClusterAgentErrors(t *testing.T) {
})
}
}

func TestLeaderCreateDeleteLocalHeap(t *testing.T) {
Contributor

nit: It should probably be named something like TestPodAutoscalerObjectsLimit, since it tests the objects limit rather than the heap itself.

@jennchenn jennchenn requested a review from vboulineau October 2, 2024 21:47
podAutoscalerInternal.UpdateCreationTimestamp(podAutoscaler.CreationTimestamp.Time)
if podAutoscalerInternal.CreationTimestamp().IsZero() {
podAutoscalerInternal.UpdateCreationTimestamp(podAutoscaler.CreationTimestamp.Time)
return autoscaling.Requeue, c.updateAutoscalerStatusAndUnlock(ctx, key, ns, name, nil, podAutoscalerInternal, podAutoscaler)
Contributor

Is there any reason why we requeue here? I'd not expect it to be necessary (similar to the Generation update).
nit: I'd move this code closer to the SetGeneration call for consistency.

Member Author

Originally set it to requeue to trigger a store update earlier and minimize the amount of time that a status may incorrectly show an error state (i.e. when the limit is not exceeded but the key hasn't yet been added to the heap because it had a zero creation timestamp). Moved this and removed the requeue for now as well!


localHash, err := autoscaling.ObjectHash(podAutoscalerInternal.Spec)
localHash, err := autoscaling.ObjectHash(podAutoscalerInternal.Spec())
Contributor

Good find! We need to check all usage of ObjectHash to make sure we do not have other issues.

@@ -420,7 +423,7 @@ func (c *Controller) deletePodAutoscaler(ns, name string) error {
func (c *Controller) validateAutoscaler(podAutoscalerInternal model.PodAutoscalerInternal) error {
// Check that we are within the limit of 100 DatadogPodAutoscalers
key := podAutoscalerInternal.ID()
if !c.limitHeap.Exists(key) {
if !podAutoscalerInternal.CreationTimestamp().IsZero() && !c.limitHeap.Exists(key) {
Contributor

Is it possible to reach this point while having a zero CreationTimestamp?

Member Author

Good point! I can update it to check whether the value in the store is zero if necessary, but removed it for now.

},
Spec: spec,
Status: status,
if gen == -1 { // Create fake pod autoscaler for remote owner
Contributor

nit: avoid duplication and set these in dpa only if gen > 0 (0 is not a valid value in an existing object IIRC).

if gen > 0 {
  dpa.UID = ...
  dpa.Generation = gen
  dpa.CreationTimestamp = ...
}

@jennchenn jennchenn requested a review from vboulineau October 4, 2024 04:01
@jennchenn
Member Author

/merge

@dd-devflow

dd-devflow bot commented Oct 4, 2024

🚂 MergeQueue: pull request added to the queue

The median merge time in main is 24m.

Use /merge -c to cancel this operation!

@dd-mergequeue dd-mergequeue bot merged commit ef648d6 into main Oct 4, 2024
226 checks passed
@dd-mergequeue dd-mergequeue bot deleted the jenn/CASCL-1_autoscaling-impose-100-crd-limit branch October 4, 2024 16:50
@github-actions github-actions bot added this to the 7.59.0 milestone Oct 4, 2024
@jennchenn jennchenn added the qa/done QA done before merge and regressions are covered by tests label Oct 4, 2024
Labels
changelog/no-changelog, component/autoscaling, qa/done (QA done before merge and regressions are covered by tests), team/containers