Reduce apiserver lease duration and GC interval to 20s #125965
Conversation
Welcome @carreter!
Hi @carreter. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: carreter. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/ok-to-test

Any implications for scalability?
As a heads up, after some discussion it's been decided that konnectivity-network-proxy will be rolling its own lease controller rather than depending on the apiserver leases. I'd imagine that reducing this duration would still be useful in other situations where an up-to-date count of apiservers is necessary. EDIT: GKE will still end up going the apiserver lease route, though the plan is still to extend KNP to include the lease controller.
Giving this a bump. To answer @aojea's questions to the best of my ability:

This does add a non-trivial number of requests to the k8s apiserver, as leases will have to be renewed every 10 seconds rather than every hour. I believe this is worth the cost in order to maintain an up-to-date count of the identities and status of the apiservers.

I don't think so. Assuming the cost incurred by having a relatively short lease duration is acceptable, it seems to me that there would be no need to increase it again.
/hold

The choice of a long time for this lease was intentional, not accidental. We chose a long duration because the consequences of apiserver identity being improperly reaped are severe enough to cause malfunctioning of core capabilities like LISTing CRs, namespace deletion, and garbage collection. See the reasoning here: https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1965-kube-apiserver-identity#caveats

While there might be some appetite for making this configurable, the intent here appears to be to use this lease in a manner similar to an endpoint slice. That sort of usage needs to account for the various readyz, healthz, and livez checks. While this may appear similar, the lease is a building block for very different usage related to apiserver coordination on storage versions.
Closing this PR, as we have opted to use a custom lease controller to manage KNP leases rather than depend on the apiserver leases, given the issues raised by @deads2k. Thanks for the feedback everyone!
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Reduces apiserver lease duration and garbage collection interval from 1 hour to 20 seconds (twice the lease renewal interval). This allows for faster detection of whether an apiserver is unhealthy, as its lease will expire much more quickly if it is not being actively renewed.
Allows other services to use apiserver leases as a reliable way of determining the number of available apiservers. The current specific use case is for konnectivity-network-proxy agents to dynamically determine the number of k8s apiservers (and thus proxy servers, assuming a 1:1 apiserver:proxy server mapping, which is usually the case). See this issue and this design doc for more information.
Which issue(s) this PR fixes:
N/A
Special notes for your reviewer:
N/A
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: