csi-driver-node: set GOMAXPROCS #911
Conversation
Thank you @Garfield96 for your contribution. Before I can start building your PR, a member of the organization must set the required label(s) {'reviewed/ok-to-test'}. Once started, you can check the build status in the PR checks section below.
@Garfield96 Thank you for your contribution.
@Garfield96 Thank you for the find. Quite interesting indeed. Small nit: I think the original upstream issue is kubernetes-sigs/gcp-compute-persistent-disk-csi-driver#968, which may be interesting. Could you please add a comment with this issue? Then I believe it's okay to proceed.
Also, based on the upstream PR, maybe the idiomatic way is to use the provided csi flag.
nice find, lgtm
How to categorize this PR?
/area performance
/kind enhancement
/platform gcp
What this PR does / why we need it:
Go creates internal runtime data structures sized by the number of available CPUs, so memory consumption increases on larger machines. These data structures would only be needed if the load scaled with node size, which is not the case for CSI drivers: the number of volumes does not grow with the node size. This change therefore overrides the automatically detected CPU count and sets it to 2 via the GOMAXPROCS environment variable, keeping memory consumption stable across all node sizes. As a result, the VPA does not have to scale up the resources when a cluster uses many large nodes, which in the worst case could lead to pod eviction.
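For illustration only (not part of this PR's diff), here is a minimal Go sketch of the mechanism the change relies on: the Go runtime reads the GOMAXPROCS environment variable at startup, so exporting GOMAXPROCS=2 on the csi-driver-node container caps the scheduler's P count regardless of how many CPUs the node has.

```go
// Minimal sketch: the Go runtime picks up GOMAXPROCS from the environment
// at process start. Run with `GOMAXPROCS=2 go run main.go` on a large
// machine to see the cap applied.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOMAXPROCS(0) queries the current value without changing it.
	fmt.Printf("NumCPU:     %d\n", runtime.NumCPU())
	fmt.Printf("GOMAXPROCS: %d\n", runtime.GOMAXPROCS(0))
}
```

On a many-core node this prints GOMAXPROCS: 2 when the variable is set, and the runtime's per-P structures are sized accordingly, which is where the memory savings come from.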
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Release note: