Welcome to the tf-aws-rsvl project. This project demonstrates how to configure a Terraform backend that stores and manages infrastructure state files in a secure remote location, with versioning and state locking, using AWS S3 for remote storage and AWS DynamoDB for state locking.
By default, Terraform stores state locally in a file named `terraform.tfstate`. When working with Terraform in a team, a local state file makes state management complicated because there is no guarantee that each user has the latest state data on their machine before applying infrastructure changes. With remote state, Terraform writes the state data to a remote location that is shared by all team members.
Terraform state versioning is the practice of keeping previous versions of the Terraform state file. In case of accidental deletion or corruption, we can view the state file's history and restore an earlier version.
Terraform state locking prevents concurrent edits to the state file. Without locking, two or more users running the `terraform apply` command at the same time could corrupt the state file.
The screenshot below shows what happens when two people run `terraform apply` or `terraform plan` at the same time.
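As a sketch of the locking side, the S3 backend only requires the DynamoDB table to have a string partition key named `LockID`. The table name and capacity values below are placeholders, not the ones used in this repo's `dynamodb.tf`:

```hcl
# Hypothetical example: a minimal DynamoDB table for Terraform state locking.
# The S3 backend requires the partition key to be a string attribute named "LockID".
resource "aws_dynamodb_table" "terraform_locks" {
  name           = "terraform-state-locks" # placeholder name
  billing_mode   = "PROVISIONED"
  read_capacity  = 5
  write_capacity = 5
  hash_key       = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```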
- AWS S3 for State Storage: Stores Terraform state files in a secure and durable S3 bucket.
- AWS DynamoDB for State Locking: Uses a DynamoDB table to lock the state file when it's being modified, preventing concurrent modifications.
- Versioning: Enables versioning on the S3 bucket to allow for state recovery in case of accidental deletions or modifications.
- Server-Side Encryption: Uses server-side encryption on the S3 bucket to protect your data at rest.
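The storage side can be sketched as follows, assuming a recent AWS provider where versioning and encryption are separate resources. The bucket name is a placeholder and must be globally unique:

```hcl
# Hypothetical example: an S3 bucket for Terraform state with versioning
# and server-side encryption enabled.
resource "aws_s3_bucket" "terraform_state" {
  bucket = "my-terraform-state-bucket-example" # placeholder; must be globally unique
}

# Keep previous versions of the state object for recovery.
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Encrypt state objects at rest.
resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
```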
Before you begin, make sure you have the following:
- Terraform installed on your local machine
- An AWS account
- An IAM user with permissions to create and delete S3 buckets and DynamoDB tables
- AWS credentials for that IAM user configured in your environment or terminal
- Basic knowledge of AWS, Terraform, S3, and DynamoDB
To deploy the infrastructure, follow these steps:
1. Clone this repository to your local machine.
2. Navigate to the directory containing the Terraform configuration files.
3. Modify `variables.tf` to your preference, and make sure the S3 `bucket_name` is unique.
4. Modify the `read_capacity` and `write_capacity` values in `dynamodb.tf` based on your preference.
5. Initialize your Terraform workspace with `terraform init`.
6. Create a plan with `terraform plan`.
7. Apply the plan with `terraform apply`. This will provision the S3 bucket and the DynamoDB table.
8. Take note of the `dynamodb_table_name` and `s3_bucket_name` output values, as we need to enter them into our Terraform backend configuration.
9. Move the local Terraform state to the remote backend by modifying the `backend.tf` file: uncomment the backend block and set the values of `bucket` and `dynamodb_table` to the outputs from steps 7-8.
10. Run `terraform init` again, and your existing state will be copied to the remote S3 bucket we created earlier.
11. If you wish to use this in a production environment, it is recommended to change the `dynamodb.tf` parameter `deletion_protection_enabled = false` to `true` to prevent accidental deletion of the DynamoDB table.
12. If you wish to delete the S3 bucket and DynamoDB table with the `terraform destroy` command, make sure you uncomment `force_destroy = true` in `s3.tf`. Be careful and always review the destroy plan before answering yes.
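The backend configuration you end up with after uncommenting `backend.tf` might look like the sketch below. The bucket and table names are placeholders for your own output values, and the `key` and `region` shown here are assumptions, not values from this repo:

```hcl
# Hypothetical backend.tf after uncommenting: bucket and dynamodb_table come
# from the s3_bucket_name and dynamodb_table_name outputs of the first apply.
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket-example" # from the s3_bucket_name output
    key            = "terraform.tfstate"                 # assumed path of the state object
    region         = "us-east-1"                         # assumed region
    dynamodb_table = "terraform-state-locks"             # from the dynamodb_table_name output
    encrypt        = true
  }
}
```

On the next `terraform init`, Terraform detects the new backend and offers to copy the existing local state to the S3 bucket.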
Please feel free to modify this project and use this repo for your own needs.
This project is distributed under the GNU General Public License (GPL) v3.0. See LICENSE.txt for more information.
If you have any questions or feedback, feel free to reach out.