Reset fixes when you have secondary storage off the main / with symlinks #102
Conversation
… just the masters. This allows localized k3s commands and simplifies patching scripts to allow the node to drain itself before rebooting
While this might work in your use case, I think this might be too specific to merge in.
I think the change will work for all; I was careful to still delete the files to clean up, just not to get a failure when trying to delete the symlink.
This includes changes that are not necessary, and it also deletes items we need to clean up.
- "{{ systemd_dir }}/k3s-node.service" | ||
- /etc/rancher/k3s | ||
- /run/k3s | ||
- /run/flannel | ||
- /etc/rancher/ | ||
- /var/lib/kubelet | ||
- /var/lib/rancher/k3s |
This is part of this cleanup. We need this for k3s cleanup.
- name: find service files within the directories that can be symlinks, binaries and data for people who have a separate volume and symlink for /var/lib/rancher and /var/lib/kubelet
  ansible.builtin.find:
    paths:
      - /var/lib/rancher/
This doesn't seem to be where k3s places its files unless you are doing something custom, and we cannot support custom configurations.
@@ -160,6 +160,13 @@
    --kubeconfig ~{{ ansible_user }}/.kube/config
  changed_when: true

- name: Fetch cluster config file from first master to push to agent nodes - allowing agent nodes to run k3s kubectl xxx (patching, convenience)
This change was also included in #101, which we aren't going to merge.
I am closing this because, although it might be a feature you want, it isn't something we want to include in the core offering of this repo. If you would like this feature you may need to maintain a fork. Thank you!
They were duplicative.
I'm ok with you canceling this, but I don't think you understand it.

@timothystewart6 commented on this pull request, in roles/reset/tasks/main.yml:
@@ -29,20 +29,44 @@
  loop_control:
    loop_var: mounted_fs
+#
+# it may become or be a common practice to separate /var/lib/rancher and /var/lib/kubelet into their own volumes. Maybe it would be better to just move /var altogether, but that's not
+# terribly convenient for remote or headless machines (e.g. raspberry pi compute modules, embedded in Turing Pi or Desk Pi, where you can't get to single user mode).
+# Recommendation, in this case, is to mount a new volume, such as /k3s-data or similar, and symlink /var/lib/rancher and /var/lib/kubelet into directories
+# other k3s data points like /run/k3s are small/transient/in memory and shouldn't be on a volume
+#
+# The approach is to delete the files in the folders first, then remove the folders/files that are not symlinks in the follow-up
+#
+- name: find service files within the directories that can be symlinks, binaries and data for people who have a separate volume and symlink for /var/lib/rancher and /var/lib/kubelet
+  ansible.builtin.find:
+    paths:
+      - /var/lib/rancher/
This doesn't seem to be where k3s places its files unless you are doing something custom, and we cannot support custom configurations.
I do see what is going on: a find before a delete. However, this is a copy of the script that k3s uses to clean up (which I think we should switch to at some point). It also included a lot of extra changes in this PR.
I think we might switch to this at some point too, which might alleviate our issue #108.
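For reference, k3s installs its own cleanup scripts alongside the binary, so switching the reset role to them could look roughly like this. This is a sketch, not this repo's code; it assumes the k3s default install paths (`/usr/local/bin/k3s-uninstall.sh` on servers, `/usr/local/bin/k3s-agent-uninstall.sh` on agents), and the task names are illustrative:

```yaml
# Sketch only: invoke k3s's bundled uninstall scripts from a reset role.
# Paths are the k3s defaults; adjust if INSTALL_K3S_BIN_DIR was changed.
- name: Run k3s's own uninstall script on server nodes, if present
  ansible.builtin.command: /usr/local/bin/k3s-uninstall.sh
  args:
    removes: /usr/local/bin/k3s-uninstall.sh   # skipped when the script does not exist

- name: Run the agent uninstall script on worker nodes, if present
  ansible.builtin.command: /usr/local/bin/k3s-agent-uninstall.sh
  args:
    removes: /usr/local/bin/k3s-agent-uninstall.sh
```

The `removes` argument makes each task a no-op on nodes where the corresponding script was never installed.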
Proposed Changes
So this one is a bit more complex, and maybe something different is correct. For my setup, I have
/var/lib/rancher symlinked to /k3s-data/rancher
and
/var/lib/kubelet symlinked to /k3s-data/kubelet
This is to move the bulk of the data off the RPi's eMMC/SD card and keep etcd data off the card, which I understand is a good thing (leaving etcd on the card is a bad thing). There may be better ways to move these volumes off the main storage, but this seems to make sense to me.
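The layout described above could be set up with a couple of tasks like these. A sketch only: `/k3s-data` is the example volume name from this PR, the volume is assumed to already be mounted, and ownership/mode handling is simplified:

```yaml
# Sketch of the symlinked layout described above. Assumes the secondary
# volume is already mounted at /k3s-data and k3s is not yet installed.
- name: Create target directories on the secondary volume
  ansible.builtin.file:
    path: "/k3s-data/{{ item }}"
    state: directory
    mode: "0755"
  loop: [rancher, kubelet]

- name: Symlink the k3s data paths onto that volume
  ansible.builtin.file:
    src: "/k3s-data/{{ item }}"
    dest: "/var/lib/{{ item }}"
    state: link
  loop: [rancher, kubelet]
```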
In either case, reset.yml would fail because it didn't want to delete symlinks, and it would also clear/delete the kubelet symlink itself because of the missing trailing / on that folder.
The fix is to take the folders that may (logically) be symlinked, use Ansible's find module to get the list, and delete the files in a loop later. Not pretty, and maybe there's a better way, but this seemed to fit the bill.
Comments/thoughts welcome...
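A minimal sketch of that find-then-delete approach (paths, task names, and the registered variable are illustrative, not the PR's literal tasks):

```yaml
# Sketch: enumerate the contents of directories that may themselves be
# symlinks, then delete the contents while leaving the links in place.
- name: Find the contents of possibly-symlinked k3s data directories
  ansible.builtin.find:
    paths:
      - /var/lib/rancher/   # trailing slash resolves through the symlink
      - /var/lib/kubelet/
    file_type: any
    hidden: true
  register: k3s_data_contents

- name: Delete the contents but keep the (possibly symlinked) directories
  ansible.builtin.file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ k3s_data_contents.files }}"
```

Because only the entries inside each directory are removed, the symlinks at /var/lib/rancher and /var/lib/kubelet survive the reset.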
Happy to share my kubeprep script that sets up each of them using /dev/sda as a volume and does all the linking, leaving some space for Longhorn as well...
Checklist
- site.yml playbook
- reset.yml playbook