Currently, when we run test flows with kind, we run `kubetest2 kind --up --down`, which also deletes the kind cluster after the test flow finishes normally. But if the Prow job is interrupted in the middle, the cluster is not properly deleted and leaks memory on the build node, as described in kubernetes-sigs/kind#303 (comment).
We should add a `trap` command to make sure the cluster is still cleaned up even when the Prow job is interrupted.
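A minimal sketch of what such a trap-based wrapper could look like, assuming a bash script that drives kubetest2; the `CLUSTER_NAME` variable and the `--cluster-name` flag are illustrative, not taken from the current job configs:

```bash
#!/usr/bin/env bash
set -o errexit -o nounset -o pipefail

# Illustrative cluster name; the real jobs may derive this differently.
CLUSTER_NAME="${CLUSTER_NAME:-kind-prow}"

cleanup() {
  # Best-effort teardown so an interrupted Prow job does not leave the
  # kind cluster and its node containers behind on the build node.
  kind delete cluster --name "${CLUSTER_NAME}" || true
}

# Run cleanup on normal exit and on the signals Prow sends when it
# aborts a job, before the grace period expires.
trap cleanup EXIT INT TERM

# --up/--down still cover the normal path; the trap covers interruptions.
kubetest2 kind --up --down --cluster-name "${CLUSTER_NAME}" "$@"
```

Note that the trap only helps if the job's grace period is long enough for `kind delete cluster` to finish (see the comment below).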
you also most likely need to set the prowjob timeout / grace period to something other than the 15s? default. bootstrap.py used to do 15m by default, so prow.k8s.io uses that as the default (it's configurable for a prow instance); otherwise your scripts don't get much time to clean up.
unfortunately nested kubernetes brings quite a suite of problems; you probably want to look through #303 for anything else you've missed (e.g. inotify).
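For completeness, the inotify issue referenced there is usually worked around by raising the host's inotify limits before bringing up nested clusters; the values below are commonly used examples, not something prescribed in this issue:

```bash
# Raise inotify limits on the build node so the nested kubelets' file
# watchers don't exhaust the defaults (see kubernetes-sigs/kind#303).
sudo sysctl -w fs.inotify.max_user_watches=524288
sudo sysctl -w fs.inotify.max_user_instances=512
```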