KVStore Tools #177
base: master
Conversation
- backup
- upgrade
- disable
- include in vars and post-install steps
Possible other features to include:
@arcsector This is a great PR, especially the migration part. As I mentioned in the other comments, the commands need authentication, so we either need to add them, or start the whole task with …
- clean
- destructive resync
- get kvstore captain
- get shcluster captain
Fixed all the auth issues (sorry, it slipped my mind) in ce2c80a
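For context, authenticating a Splunk CLI call from an Ansible task typically means passing `-auth`. A minimal sketch, assuming hypothetical `splunk_home`, `splunk_auth_user`, and `splunk_auth_pass` variables (these names are not taken from the role):

```yaml
# Sketch only: splunk_auth_user / splunk_auth_pass are assumed
# credential variables, not necessarily the role's actual ones.
- name: Backup KVStore (authenticated)
  command: >
    {{ splunk_home }}/bin/splunk backup kvstore
    -archiveName preAnsibleVersionUpgradeBackup
    -auth {{ splunk_auth_user }}:{{ splunk_auth_pass }}
  no_log: true  # keep the credentials out of task output and logs
```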
- name: Backup KVStore
  include_tasks: adhoc_backup_kvstore.yml
  vars:
    - archive_name: "-archiveName preAnsibleVersionUpgradeBackup"
Should this maybe be customizable?
Eh, not in my opinion, but it's trivial to do so.
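If it were made customizable, one trivial way would be a `default()` filter over a new variable. A hedged sketch (the `splunk_kvstore_archive_name` variable name is hypothetical):

```yaml
# Hypothetical: expose the archive name as an overridable variable,
# keeping the current hard-coded value as the default.
- name: Backup KVStore
  include_tasks: adhoc_backup_kvstore.yml
  vars:
    archive_name: "-archiveName {{ splunk_kvstore_archive_name | default('preAnsibleVersionUpgradeBackup') }}"
```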
- removed unused var
- check that we can backup before we do
- checks are changed_when false
@@ -69,6 +70,10 @@ splunk_shc_target_group: shc
splunk_shc_deployer: "{{ groups['shdeployer'] | first }}" # If you manage multiple SHCs, configure the var value in group_vars
splunk_shc_uri_list: "{% for h in groups[splunk_shc_target_group] %}https://{{ hostvars[h].ansible_fqdn }}:{{ splunkd_port }}{% if not loop.last %},{% endif %}{% endfor %}" # If you manage multiple SHCs, configure the var value in group_vars
start_splunk_handler_fired: false # Do not change; used to prevent unnecessary splunk restarts
splunk_enable_kvstore: true
splunk_kvstore_storage: undefined # Can be defined here or at the group_vars level - accepted values: "wiredTiger" or "undefined", which leaves the default
splunk_kvstore_version: undefined # Can be defined here or at the group_vars level - accepted values: 4.2 or "undefined", which leaves the default
I don't see this variable used either
Oh I see, the splunk_kvstore_version is unused - I'll add it to the conditionals at the bottom of the upgrade procedure
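Gating the upgrade steps on both new variables could look like the following sketch (the task name and file are placeholders, not the PR's actual task):

```yaml
# Hypothetical conditional: only run the migration/upgrade step when
# both vars have been explicitly set away from "undefined".
- name: Upgrade KVStore engine and version
  include_tasks: adhoc_upgrade_kvstore.yml
  when:
    - splunk_kvstore_storage != "undefined"
    - splunk_kvstore_version != "undefined"
```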
…-for-splunk into feat-kv-migration
Guess I accidentally created a merge commit - my bad. Feel free to remove - I'm not brave enough to force push to a fork.
After some preliminary testing, there are some issues that need to be addressed here.
@jewnix My thoughts:
So here is what I think: the destructive resync should be removed from this PR. First, because this is a snowflake issue, and destructive KVStore resync is not something that is documented. Second, because it also destroys the SHC completely.
deleting destructive resync task
@dtwersky @jewnix Sorry I've been inactive on this. I removed the destructive resync, and I added a default value, though it's not for …
Get SHCluster and KVstore status as JSON blobs
Updating this with the oplog size increase, as well as some helpful tasks to get KVStore status and SHCluster status as JSON blobs for Ansible consumption. Note that this isn't using the docs' oplog-increase method, but rather a method that Support had been passing around for a while; if you'd prefer it to reflect that document, I can do that instead. Let me know!
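Fetching such a status blob can be done against the splunkd REST API. A minimal sketch, assuming the standard management port and hypothetical `splunk_admin_user` / `splunk_admin_pass` credential variables (the exact JSON path into the response may differ by Splunk version):

```yaml
# Sketch: pull KVStore status as JSON and register it for later tasks.
- name: Get KVStore status as JSON
  uri:
    url: "https://{{ ansible_fqdn }}:{{ splunkd_port | default(8089) }}/services/kvstore/status?output_mode=json"
    user: "{{ splunk_admin_user }}"
    password: "{{ splunk_admin_pass }}"
    validate_certs: false
    return_content: true
  register: kvstore_status

- name: Show the raw KVStore status blob
  debug:
    var: kvstore_status.json
```

The same pattern works for SHCluster status via the `/services/shcluster/status` endpoint.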
Hi @arcsector, sorry this was left dormant for so long after so much work has gone into it. I have been working internally to figure all of this out for a while on a different project that was more for ephemeral Docker instances, but I discovered a lot of things related to this PR that made me look at KVStore upgrades a little differently. There are so many differences between Splunk versions, MongoDB versions, and MongoDB engines regarding upgrade paths. I'm not sure we should assume that people are still running version 8, and because later versions already migrate and update automatically, there may only be a need to run some of these commands in specific scenarios. There are so many amazing things in this PR, and I don't want to close it out and start fresh, but maybe it needs to be revisited, and we should think about whether we want to make this compatible with older versions or major version jumps. What are your thoughts?
Thanks so much for the positive comments, glad you like the materials here - I'm definitely open to revisiting this as a PR of optional tasks and then making a playbook that calls all of them to do an all-in-one upgrade. Does that sound like a good plan? I could even put them in a sub-folder. Do you happen to have a good map of those version transitions and what they entail as far as mongod version and engine? I'm having to go through the docs and switch back and forth between versions, as it's not clear what the approach should even be... Thanks!
Summary
This PR provides additional KVStore tools available to the user to be configured, including: