Replies: 2 comments
-
To provide an example, I think it should be possible to write a template like the following:

```yaml
basedOn: template://k3s
provision:
- file: template://install/helm
- script: |
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update bitnami
    helm install wordpress bitnami/wordpress \
      --set service.type=NodePort \
      --set volumePermissions.enabled=true \
      --set mariadb.volumePermissions.enabled=true
```

This will only work if the provisioning script from […]. I know that there are workarounds, like relying on the fact that […].
-
I realized that […]. This means combining multiple […]. So I propose switching to using […]. The only issue is that somebody might have used the current algorithm to move the default mount location for the home directory to a different mount point in […]:

```yaml
mounts:
- location: '~'
  mountPoint: /home/guest
```

Right now this would modify the existing mount. With the proposed change, this would create an additional mount. While not ideal, I don't think this should break anything (famous last words alert!). So I guess I have a 3rd question: […]
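To make the difference concrete, here is a rough sketch of the resulting `mounts` list under both behaviours. The default mounts shown and the final ordering are assumptions for illustration only.

```yaml
# Current behaviour (sketch): the override entry is matched on the shared
# key mounts[].location and updates the existing default mount in place.
mounts:
- location: '~'
  mountPoint: /home/guest   # default "~" mount, modified by the override
- location: /tmp/lima
  writable: true
---
# Proposed behaviour (sketch): the lists are simply combined, so the
# override entry becomes an additional mount (shown appended here; the
# exact ordering is part of the open questions in this discussion).
mounts:
- location: '~'             # default "~" mount, left unchanged
- location: /tmp/lima
  writable: true
- location: '~'
  mountPoint: /home/guest   # override entry, added as a separate mount
```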
-
I've been working on the `basedOn` feature that I've discussed previously at #2520 (reply in thread). A template can take a list of other (base) templates to provide default settings:
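A minimal sketch of what such a template could look like (the exact syntax, the base template names, and the extra settings are assumptions for illustration, not the final design):

```yaml
# Hypothetical instance template that inherits defaults from a list of
# base templates (the priority order within the list is an assumption).
basedOn:
- template://k3s
- ./site-defaults.yaml   # hypothetical local file with shared settings

# Settings in this file override whatever the base templates provide.
cpus: 4
memory: "8GiB"
```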
Each base template can recursively be `basedOn` additional templates. It already works quite nicely, maintaining YAML comments from both the instance and the base templates as appropriate.
I do want to use the same mechanism during instance start for merging `defaults.yaml` and `override.yaml` (the `basedOn` mechanism is only executed during instance create, and the assembled template is then stored in the instance directory). The existing merge algorithm is basically: […]
There are some exceptions to that (e.g. `dns` lists work like scalar values and are not appended; see the sketch below). Both `mounts` and `networks` are combined in reverse order (lowest to highest). I believe I did this because both use a shared key (`mounts[].location` and `networks[].interface`) to update the lower-priority settings with higher-priority ones later in the list (which are then discarded)[^1].

Otherwise the order of `mounts` and `networks` shouldn't really matter (except maybe for the buggy behaviour of overlapping reverse-sshfs mounts).
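As a sketch of the `dns` exception (the file split and the addresses are placeholders):

```yaml
# defaults.yaml (lower priority)
dns:
- 192.168.5.3
---
# override.yaml (higher priority)
dns:
- 1.1.1.1
# Because dns behaves like a scalar value, the higher-priority list wins
# outright: the merged result is just [1.1.1.1], not a concatenation.
```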
I had assumed that we also concatenated the `provision` and `probes` lists in reverse order, so that the highest-level scripts run last and can adapt to the lower-level ones running before them. But we don't actually do so.

Do you think anyone relies on a provisioning script from `override.yaml` to run before the provisioning scripts of the regular template? I can't think of any of our bundled scripts that would be configurable by another script running first.

So here are my questions:
1. Can we change the order of `mounts` and `networks` as long as the combining mechanism on the shared key continues to work the same way?
2. Should we reverse the order of combined `provision` and `probes` scripts? (See the sketch below.)
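To make question 2 concrete, here is a sketch of a combined `provision` list (the scripts are placeholders):

```yaml
# Sketch of the combined provision list as it is assembled today
# (highest priority first):
provision:
# from override.yaml
- script: |
    echo "override.yaml script (currently runs first)"
# from the instance template
- script: |
    echo "template script"
# from defaults.yaml
- script: |
    echo "defaults.yaml script (currently runs last)"
# Reversing the combined order (lowest to highest) would make the
# override.yaml script run last, so it could adapt to whatever the
# lower-priority scripts have already set up.
```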
[^1]: I noticed that for consistency `additionalDisks` should probably be treated the same way, with `additionalDisks[].name` being the shared key, even though I don't really see much of a use case for it.