Race condition on kube config when running in parallel #36

logicbomb421 opened this issue Feb 4, 2020 · 2 comments
When running this step in parallel workflows, a race condition on the `config.lock` file generated when changing the kube context is possible. This is very intermittent, but when it happens, one (or more) steps will error with: `error: open /codefresh/volume/sensitive/.kube/config.lock: file exists`.

This behavior can be reproduced by running two `kubectl config use-context` commands in parallel (e.g. in two different terminals).
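The race can also be sketched without kubectl at all, assuming (as kubectl's client-go locking does) that the lock is taken by exclusively creating `config.lock` and failing if it already exists. The paths, step names, and timing below are illustrative, not the exact client-go implementation:

```shell
#!/bin/sh
# Simulate two parallel workflow steps racing for one kubeconfig lock.
LOCK=/tmp/demo-kubeconfig.lock
LOG=/tmp/demo-kubeconfig-race.log
rm -f "$LOCK" "$LOG"

take_lock() {
  # 'set -C' (noclobber) makes '>' fail when the target already exists,
  # mimicking an exclusive (O_CREATE|O_EXCL) open of the lock file.
  if ( set -C; : > "$LOCK" ) 2>/dev/null; then
    echo "step $1: acquired lock, writing config" >> "$LOG"
    sleep 1                # hold the lock while the config is rewritten
    rm -f "$LOCK"
  else
    echo "step $1: error: open $LOCK: file exists" >> "$LOG"
  fi
}

take_lock A &              # two steps start at (nearly) the same time
take_lock B &
wait
cat "$LOG"
```

One step acquires the lock; the other fails with the same "file exists" error reported above, which is why the failure only shows up when steps run in parallel.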

These calls originate from the `use-context` command that is present in all install and promotion scripts.

Would it be possible to make this an optional parameter (or add a bypass parameter) to support use cases where multiple releases are installed into the same cluster and the context has been preset?
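Until such a parameter exists, a generic workaround (not a feature of this plugin; the context name and variable names are illustrative) is to give each parallel step its own private copy of the kubeconfig, so each `use-context` call locks a different file:

```shell
#!/bin/sh
# Give this step a private kubeconfig copy so parallel steps never
# contend for the same config.lock. SHARED_CONFIG and the context
# name are illustrative placeholders.
SHARED_CONFIG=${SHARED_CONFIG:-$HOME/.kube/config}
STEP_NAME=${1:-step}

PRIVATE_CONFIG=$(mktemp /tmp/kubeconfig.XXXXXX)
if [ -f "$SHARED_CONFIG" ]; then
  cp "$SHARED_CONFIG" "$PRIVATE_CONFIG"
fi
export KUBECONFIG=$PRIVATE_CONFIG

# kubectl now locks $PRIVATE_CONFIG.lock instead of the shared file.
# Guarded so the sketch degrades gracefully where kubectl is absent.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config use-context "my-context"
fi
echo "$STEP_NAME using $KUBECONFIG"
```

Each step mutates only its own copy, so the shared config on the volume is never locked by parallel `use-context` calls.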

Thanks!

pampy commented Mar 26, 2020

I recently hit the same issue and had to go back to sequential steps.
Ironically, the documentation has an explicit parallel-deployment example that doesn't mention the race condition:
https://codefresh.io/docs/docs/deploy-to-kubernetes/custom-kubectl-commands/#example-of-parallel-deployment-with-kubectl

@ScottMillard

Just experienced this as well. If there is no solution, we will have to move back to sequential steps.
