[IBCDPE-935] Setting up declarative definition of TF resources #11
Conversation
🔥 LGTM! Going to pre-approve. I really like the structure, just have some questions about future state in terms of CI/CD of dev/prod and where you envision this going.
name = "sage-aws-vpc" | ||
terraform_provider = "aws" | ||
administrative = false | ||
branch = "ibcdpe-935-vpc-updates" |
Are you thinking we would copy `dev` and have a `prod` when we get there? Or would we parametrize `{branch}` based off of the branch we are working on and have `dev` and `prod` set up as blue/green deployments?
In the docs they talk a little bit about it here:
https://docs.spacelift.io/integrations/source-control/github#multi-stack-version
> One frequent type of setup involves two similar or even identical environments - for example, staging and production. One approach would be to have them in a single repository but in different directories, setting project_root runtime configuration accordingly. This approach means changing the staging directory a lot and using as much or as little duplication as necessary to keep things moving, and a lot of commits will necessarily be no-ops for the production stack. This is a very flexible approach, and we generally like it, but it leaves Git history pretty messy and some people really don't like that.
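To make that single-repository, directory-per-environment setup concrete, here is a minimal sketch using the Spacelift Terraform provider; the repository name, stack names, and directory paths are assumptions for illustration, not what this PR defines:

```hcl
# Hypothetical sketch only: repository, stack names, and paths are assumed.
# Two Spacelift stacks track the same repository, each rooted at a different directory.
resource "spacelift_stack" "staging" {
  name         = "infra-staging"
  repository   = "my-infra-repo"            # assumed repository name
  branch       = "main"
  project_root = "environments/staging"     # Terraform runs from this directory
}

resource "spacelift_stack" "production" {
  name         = "infra-production"
  repository   = "my-infra-repo"
  branch       = "main"
  project_root = "environments/production"  # same code shape, separate directory
}
```

Promotion in that model means copying/moving changes from the staging directory into the production directory, which is where the no-op commits the docs mention come from.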
I was thinking of not using long-lived branches for this project (no dev or staging branches, etc.) and instead handling it by moving resources between the appropriate folders as we move items up. I do recognize that also brings some complexity and the possibility of leaving behind changes if they're not moved/copied to the appropriate locations.
Conversely, long-lived branches have similar issues: one branch can become wildly out-of-date, and you can get conflicts between branches where you cannot merge dev into staging (for example), so you have to merge your feature branch into both dev and staging. That can also bring problems where a feature branch doesn't get moved to production for a long time, and dev/staging are left in an awkward state.
This is all to say - I haven't fully settled on one or the other.
What does @Sage-Bionetworks-Workflows/dpe think on this matter?
This talks a bit about trunk based development:
https://trunkbaseddevelopment.com/
@thomasyu888 Here is that blog post that I finally found:
They show using some structures like:
Which is close to what we are doing here, except we are just putting the dev/prod/staging directories into the root of the repository.
The thing though is that it does not cover what moving from dev -> staging looks like :(
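For reference, the kind of root-level layout being described would look roughly like this (hypothetical; directory names are assumed):

```
dev/        # resources currently being built out
staging/    # resources promoted out of dev
prod/       # resources promoted out of staging
modules/    # shared module code referenced by each environment
```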
Something else I was thinking about as well. One way we could go about this is that we do not put any environment-specific logic within the stacks we are building. So rather than having an environment-specific folder and then a replication of all the TF resource files, we:
- Create a spacelift `context` for each of the environments that we are going to be running the stacks in
- Create spacelift `stacks` for each of those environments so they're all handled with different CI/CD processes
- Have one definition of the cloud resources that we apply to each environment, with variables coming in from the context
- If we need to have any environment-specific logic, we can conditionally create it based on the variables defined in the context (see the sketch below)
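A minimal sketch of that context-per-environment idea using the Spacelift Terraform provider; the names, repository, and the TF_VAR_environment variable are assumptions for illustration:

```hcl
# Hypothetical sketch: one shared resource definition, plus a per-environment context and stack.
resource "spacelift_context" "dev" {
  name        = "dev-environment"
  description = "Environment-specific configuration for dev"
}

# Variables attached to the context are injected into the stack's runs;
# TF_VAR_environment shows up as var.environment in the shared Terraform code.
resource "spacelift_environment_variable" "dev_environment" {
  context_id = spacelift_context.dev.id
  name       = "TF_VAR_environment"
  value      = "dev"
  write_only = false
}

resource "spacelift_stack" "dev" {
  name         = "eks-stack-dev"   # assumed stack name
  repository   = "eks-stack"       # assumed repository
  branch       = "main"
  project_root = "deployments"     # the single, shared definition of the resources
}

resource "spacelift_context_attachment" "dev" {
  context_id = spacelift_context.dev.id
  stack_id   = spacelift_stack.dev.id
}

# Inside the shared definition, environment-specific logic can key off the variable, e.g.:
#   count = var.environment == "prod" ? 1 : 0
```

A prod pair would be the same resources with a different variable value, so there is only one copy of the TF definitions to keep in sync.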
I like that idea. That is sort of the way sceptre does it: configurations per environment, with central infrastructure stacks that aren't duplicated.
Excellent - I am going to follow up and do the work as a part of this ticket: https://sagebionetworks.jira.com/browse/IBCDPE-953