
Two mounted buckets and SIGTERM during write cause sidecar to hang #257

Open
aptenodytes-forsteri opened this issue May 15, 2024 · 1 comment
Labels
bug Something isn't working


aptenodytes-forsteri commented May 15, 2024

Summary

I'm having an issue where, over time, I end up with many "stuck" pods in which the main container has errored or exited but the sidecar is still running. This may have to do with having two buckets mounted, or with writing a lot of files to the out bucket.

Details

I'm running this version of the CSI driver:

Running Google Cloud Storage FUSE CSI driver sidecar mounter version v0.1.14-gke.2 

To reproduce the issue, I have mounted two buckets, an "in" and an "out" bucket.

The container runs a simple Python script that writes to the out bucket and then raises an exception, e.g.:

def main():
  # Write 100 small files to the mounted "out" bucket...
  for i in range(100):
    with open(f"/out/{i}.txt", "w", encoding="utf8") as f:
      f.write(f"{i}")
  # ...then crash so the main container exits non-zero.
  raise Exception("oops")

if __name__ == "__main__":
  main()

If I run this container enough times, I end up with an increasing number of "hanging" pods which never terminate.
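For reference, here is a quick way to spot the stuck pods (a hypothetical helper, not part of the driver; it uses the official kubernetes Python client and assumes the GKE default sidecar container name gke-gcsfuse-sidecar and the default namespace):

# Hypothetical helper: list pods whose main container has terminated
# while the gcsfuse sidecar is still running.
from kubernetes import client, config

SIDECAR = "gke-gcsfuse-sidecar"  # GKE default sidecar container name

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("default").items:
    # Map container name -> current state for every container in the pod.
    states = {s.name: s.state for s in (pod.status.container_statuses or [])}
    sidecar = states.get(SIDECAR)
    others = [s for name, s in states.items() if name != SIDECAR]
    if sidecar and sidecar.running and others and all(s.terminated for s in others):
        print(f"{pod.metadata.name}: main container exited, sidecar still running")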

Here is some of the log output from the sidecar container:

I0510 08:04:21.519063       1 sidecar_mounter.go:158] connecting to socket "/gcsfuse-tmp/.volumes/gcs-fuse-out/socket"
I0510 08:04:21.519240       1 fdchannel.go:48] get the underlying socket
I0510 08:04:21.519262       1 fdchannel.go:60] calling recvmsg...
I0510 08:04:21.519516       1 fdchannel.go:69] parsing SCM...
I0510 08:04:21.519543       1 fdchannel.go:76] parsing SCM_RIGHTS...
I0510 08:04:21.519788       1 sidecar_mounter.go:71] start to mount bucket "out-data" for volume "gcs-fuse-out"
I0510 08:04:21.519895       1 main.go:122] waiting for SIGTERM signal...
I0510 08:04:21.520059       1 sidecar_mounter.go:299] gcsfuse config file content:
logging:
    file-path: /dev/fd/1
    format: text
I0510 08:04:21.520259       1 sidecar_mounter.go:96] gcsfuse mounting with args [gcsfuse --gid 0 --temp-dir /gcsfuse-buffer/.volumes/gcs-fuse-out/temp-dir --config-file /gcsfuse-tmp/.volumes/gcs-fuse-out/config.yaml --implicit-dirs --app-name gke-gcs-fuse-csi --foreground --uid 0 out-data /dev/fd/3]...
time="10/05/2024 08:04:21.538051" severity=INFO message="Start gcsfuse/1.4.1-gke.0 (Go version go1.21.5) for app \"gke-gcs-fuse-csi\" using mount point: /dev/fd/3\n"
time="10/05/2024 08:04:21.538520" severity=INFO message="GCSFuse mount command flags: {\"AppName\":\"gke-gcs-fuse-csi\",\"Foreground\":true,\"ConfigFile\":\"/gcsfuse-tmp/.volumes/gcs-fuse-out/config.yaml\",\"MountOptions\":{},\"DirMode\":493,\"FileMode\":420,\"Uid\":0,\"Gid\":0,\"ImplicitDirs\":true,\"OnlyDir\":\"\",\"RenameDirLimit\":0,\"CustomEndpoint\":null,\"BillingProject\":\"\",\"KeyFile\":\"\",\"TokenUrl\":\"\",\"ReuseTokenFromUrl\":true,\"EgressBandwidthLimitBytesPerSecond\":-1,\"OpRateLimitHz\":-1,\"SequentialReadSizeMb\":200,\"MaxRetrySleep\":30000000000,\"StatCacheCapacity\":4096,\"StatCacheTTL\":60000000000,\"TypeCacheTTL\":60000000000,\"HttpClientTimeout\":0,\"MaxRetryDuration\":-1000000000,\"RetryMultiplier\":2,\"LocalFileCache\":false,\"TempDir\":\"/gcsfuse-buffer/.volumes/gcs-fuse-out/temp-dir\",\"ClientProtocol\":\"http1\",\"MaxConnsPerHost\":100,\"MaxIdleConnsPerHost\":100,\"EnableNonexistentTypeCache\":false,\"StackdriverExportInterval\":0,\"OtelCollectorAddress\":\"\",\"LogFile\":\"\",\"LogFormat\":\"json\",\"ExperimentalEnableJsonRead\":false,\"DebugFuseErrors\":true,\"DebugFuse\":false,\"DebugFS\":false,\"DebugGCS\":false,\"DebugHTTP\":false,\"DebugInvariants\":false,\"DebugMutex\":false}"
time="10/05/2024 08:04:21.538594" severity=INFO message="GCSFuse mount config flags: {\"CreateEmptyFile\":false,\"Severity\":\"INFO\",\"Format\":\"text\",\"FilePath\":\"/dev/fd/1\",\"LogRotateConfig\":{\"MaxFileSizeMB\":512,\"BackupFileCount\":10,\"Compress\":true}}"
time="10/05/2024 08:04:21.538609" severity=INFO message="Creating Storage handle..."
time="10/05/2024 08:04:21.541713" severity=INFO message="Creating a mount at \"/dev/fd/3\"\n"
time="10/05/2024 08:04:21.541751" severity=INFO message="Creating a temporary directory at \"/gcsfuse-buffer/.volumes/gcs-fuse-out/temp-dir\"\n"
time="10/05/2024 08:04:21.541856" severity=INFO message="Creating a new server...\n"
time="10/05/2024 08:04:21.541881" severity=INFO message="Set up root directory for bucket out-data"
time="10/05/2024 08:04:21.648172" severity=INFO message="Mounting file system \"out-data\"..."
time="10/05/2024 08:04:21.648361" severity=INFO message="File system has been successfully mounted."
I0510 08:04:56.524242       1 main.go:131] all the other containers terminated in the Pod, exiting the sidecar container.
I0510 08:04:56.524282       1 main.go:134] sending SIGTERM to gcsfuse process: /gcsfuse --app-name gke-gcs-fuse-csi --foreground --uid 0 --gid 0 --temp-dir /gcsfuse-buffer/.volumes/gcs-fuse-in/temp-dir --config-file /gcsfuse-tmp/.volumes/gcs-fuse-in/config.yaml --implicit-dirs in-data /dev/fd/3
I0510 08:04:56.524331       1 main.go:134] sending SIGTERM to gcsfuse process: /gcsfuse --gid 0 --temp-dir /gcsfuse-buffer/.volumes/gcs-fuse-out/temp-dir --config-file /gcsfuse-tmp/.volumes/gcs-fuse-out/config.yaml --implicit-dirs --app-name gke-gcs-fuse-csi --foreground --uid 0 out-data /dev/fd/3
I0510 08:04:56.524394       1 main.go:149] received SIGTERM signal, waiting for all the gcsfuse processes exit...
I0510 08:04:56.526891       1 main.go:106] [gcs-fuse-in] gcsfuse was terminated.

We see SIGTERM sent to the gcsfuse processes for both the input and output buckets, but only the input mount actually terminates.

Expected

The sidecar always terminates when the main container exits.

Actual

The sidecar sometimes stays running because the second mount process never exits.

Hunches

Perhaps there is an implicit assumption of a single bucket, and when two are mounted some shared state makes termination flaky?

Perhaps gcsfuse cannot act on the SIGTERM while it is busy flushing writes?
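The second hunch is easy to illustrate in miniature (a deliberately simplified Python sketch, not the driver's actual code): a parent that signals two children and then waits for all of them hangs forever if one child never acts on the SIGTERM.

import signal
import subprocess
import sys
import time

# One child exits on SIGTERM (the default action); the other ignores it,
# standing in for a gcsfuse process that is stuck mid-write.
responsive = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(3600)"])
stuck = subprocess.Popen([sys.executable, "-c",
    "import signal, time; signal.signal(signal.SIGTERM, signal.SIG_IGN); time.sleep(3600)"])

time.sleep(1)  # give the second child time to install its handler
for child in (responsive, stuck):
    child.send_signal(signal.SIGTERM)

# Sidecar-style wait: block until every child has exited. The first wait()
# returns promptly; the second never does, so this process hangs, just like
# the sidecar waiting on the out-bucket mount.
for child in (responsive, stuck):
    child.wait()
    print(f"pid {child.pid} exited")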

@HoangViet144

I have experienced the same issue.

I mount only one bucket, but through multiple mount points for multiple paths. I wrote fewer than 20 files to the mounted folders (each under 500 KB). The issue still happened intermittently, and it caused my k8s job to fail because it waited too long.

It seemed that all the files were written to GCS, but the gcs-fuse sidecar got stuck handling SIGTERM.

Could running gcs-fuse as an init container (as described in #168 (comment)) fix the problem?
