[IBCDPE] Split out and start creating items as individual modules #8
Conversation
🔥 Thanks for working on this - just going to pre-approve, I just have minor comments.
This is nice - thanks for adding this!
@@ -0,0 +1,21 @@
resource "spacelift_policy_attachment" "bfauble-enforce-tags-on-resources" {
nit: should these be named sage instead of bfauble?
When we tear down the existing spacelift stacks and move to a totally programmatic approach, yes. I named the first stack 'bfauble', and even if you update the display name, the ID of the stack is sticky and does not change.
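For illustration, a renamed attachment could look like the sketch below. The `sage-enforce-tags-on-resources` name and the referenced policy/stack resources are hypothetical, not actual resources in this repo:

```hcl
# Hypothetical sketch: the same attachment renamed from "bfauble" to "sage".
# Renaming the terraform resource label does not change the underlying
# Spacelift stack ID, which stays whatever it was at creation time.
resource "spacelift_policy_attachment" "sage-enforce-tags-on-resources" {
  policy_id = spacelift_policy.enforce-tags-on-resources.id
  stack_id  = spacelift_stack.dpe-prod.id
}
```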
stacks/dpe-prod/main.tf
Outdated
What is this for?
I left this in as scaffolding for how I am thinking we'll organize our spacelift resources. Basically we need:
- Modules that define what we can create.
- Another set of resources that combine those modules in a meaningful way. This is what I am intending this `dpe-prod/main.tf` to do: it is where we create the individual "stacks" we'll be deploying out.
- Another set of spacelift-specific resources (like policies, stacks, contexts) and their relationships. When a spacelift stack is created, it requires that you point at a specific VCS location where it pulls terraform resources from. That VCS location defines the resources that "this" spacelift stack is going to deploy.
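As a rough sketch of that layering (the module path, stack inputs, and repository name below are assumptions for illustration, not the actual repo layout), `dpe-prod/main.tf` could compose the pieces like this:

```hcl
# Hypothetical composition layer: dpe-prod/main.tf wires reusable
# modules into the concrete stacks we deploy.

# A building-block module defining what we can create.
module "internal_k8_infra" {
  source = "../../modules/internal-k8-infra"
}

# A spacelift stack pointing at the VCS location whose terraform
# resources "this" stack is going to deploy.
resource "spacelift_stack" "dpe_prod" {
  name         = "dpe-prod"
  repository   = "eks-stack"        # assumed repo name
  branch       = "main"
  project_root = "stacks/dpe-prod"  # directory this stack plans/applies
}
```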
modules/internal-k8-infra/main.tf
Outdated
Do you have a list of questions you'd like spot to answer?
- What terraform modules are required to install the V2 spot ocean components into our EKS cluster?
- What dependencies need to be completed and operational before attempting to install any spot ocean terraform resources?
- What does the removal of ocean's terraform resources entail? Is everything properly reset back to the "before" state if removed?
stacks/dpe-prod/main.tf
Outdated
Are we currently using these modules?
No. This is planned future work to start programmatically creating the entire stack within spacelift.
desired_size = 1
min_size     = 0
max_size     = 2
This is still something I'd like to figure out; spot claimed on the call that we didn't need a managed node group.
So the problem is that in order for the add-ons to be installed in the first place, EKS has to have at least 1 node running, as that is where the tools are installed. This means that in order for the EKS cluster to be "green" it has to have at least a single node.
After this point we can install spot ocean.
Even their docs here: https://github.com/spotinst/terraform-spotinst-ocean-eks?tab=readme-ov-file#following-modules-should-be-used-instead--
specify that we should:
- Use the AWS EKS provider to create an EKS cluster and its resources
- Then install the ocean controllers/k8s
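The two steps above might be sketched roughly as follows. The module sources are the public registry modules the spot docs point at, but the cluster name, version, and node-group inputs are assumptions, not our actual config:

```hcl
# 1. EKS cluster with a small managed node group so the cluster has at
#    least one node for add-ons to land on (the "green" precondition).
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "dpe-prod"  # assumed name
  cluster_version = "1.27"      # assumed version

  eks_managed_node_groups = {
    bootstrap = {
      desired_size = 1  # at least one node so add-ons can schedule
      min_size     = 0
      max_size     = 2
    }
  }
}

# 2. Only once the cluster is up, install the Spot Ocean controller.
module "ocean_controller" {
  source = "spotinst/ocean-controller/spotinst"

  cluster_identifier = module.eks.cluster_name
  # spot account/token inputs omitted; see the linked spotinst docs
}
```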