Built-in health check for GCS-FUSE-CSI-DRIVER #258
Labels: enhancement (New feature or request)
It will be challenging for the CSI driver to implement such a monitoring mechanism. We could probably leverage the CSI Volume Health Monitoring feature, but since that feature is still alpha, GKE cannot use it for now. We are working to expose GCSFuse metrics via the CSI driver, and in the future applications may be able to rely on these metrics to decide whether a GCSFuse volume is healthy.
Hi all,
Some background:
I am facing GoogleCloudPlatform/gcsfuse#1726, which is fixed in a newer version of the driver, but since I am using the Stable release channel of GKE, that version is not available to me yet.
Meanwhile, for this case (and possibly other issues), I was thinking of developing a health check for my running container using a Kubernetes livenessProbe/readinessProbe, so that Kubernetes restarts the unhealthy container. The simplest check would be an `ls` on the mount: if it fails, the mount is no longer live. But since `ls` translates to the List operation that causes the original problem, adding a periodic List would make my system more flaky.
I was wondering if there is a better way of checking the health of the mount?
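One possible workaround, sketched below: instead of `ls` (which triggers a GCS List call), call `stat()` on the mount point itself. A broken FUSE mount typically fails `stat()` with `ENOTCONN` ("Transport endpoint is not connected"), so this can detect a dead mount without listing objects. This is a minimal sketch, not something the driver documents; the mount path `/mnt/gcs` is an assumption and should match your container's `volumeMount`.

```python
import os

# Hypothetical mount path -- replace with your container's actual volumeMount.
MOUNT_POINT = "/mnt/gcs"

def mount_is_healthy(path: str) -> bool:
    """Return True if the mount responds to a cheap stat() call.

    stat() on the mount point does not issue a GCS List operation,
    unlike `ls`. A dead gcsfuse mount usually raises OSError here
    (e.g. ENOTCONN, "Transport endpoint is not connected").
    """
    try:
        os.stat(path)
        return True
    except OSError:
        return False
```

A script like this could be wired into a `livenessProbe` with an `exec` command, exiting non-zero when `mount_is_healthy` returns False. Whether `stat()` catches every failure mode of the mount is not guaranteed; it is only a cheaper signal than a periodic List.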