Networking
- create vpc
- create internet gateway
- create public route table
- create private route table
- create public route
- attach public route table to subnets (see the sketch below)
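The tasks above map onto a handful of Terraform resources. Below is a minimal HCL sketch, not the project's actual code: the resource names, the 10.0.0.0/16 and 10.0.1.0/24 CIDRs, and the example public subnet are illustrative assumptions.

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # assumed CIDR
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# example public subnet so the association below has something to attach to
resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
}

# public route: send internet-bound traffic through the internet gateway
resource "aws_route" "public_internet" {
  route_table_id         = aws_route_table.public.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw.id
}

# attach the public route table to the public subnet
resource "aws_route_table_association" "public" {
  subnet_id      = aws_subnet.public.id
  route_table_id = aws_route_table.public.id
}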
Computing
- create security group that allows ssh from 0.0.0.0/0 (see the sketch after this list)
- create security group that allows ssh and port 3000 from the vpc cidr only
- create ec2 (bastion) in the public subnet with the first security group (ssh from anywhere)
- create ec2 (application) in the private subnet with the second security group (vpc-only access)
- create two workspaces: terraform and production
- create two variable definition files (.tfvars) for the two environments
- separate network resources into a network module
- apply your code to create two environments, one in us-east-1 and one in eu-central-1
- run local-exec provisioner to print the public_ip of the bastion ec2
- upload the infrastructure code to a github project
- create a jenkins image with terraform installed inside it
- create a pipeline that takes an env parameter to apply the terraform code on a certain env
- verify your email in the ses service
- create a lambda function to send email
- create a trigger to detect changes in the state file and send the email
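The security-group and bastion items translate roughly into the HCL below. This is a hedged sketch, not the repository's code: the names, the t2.micro size, and the ami_id variable are assumptions, and aws_vpc.main / aws_subnet.public refer to the networking sketch above. The application instance, Jenkins, and SES/Lambda pieces are omitted for brevity.

variable "ami_id" {
  type = string
}

# security group allowing SSH from anywhere (for the bastion)
resource "aws_security_group" "bastion_ssh" {
  name   = "bastion-ssh"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# security group allowing SSH and port 3000 from inside the VPC only (for the application)
resource "aws_security_group" "app_internal" {
  name   = "app-internal"
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.main.cidr_block]
  }

  ingress {
    from_port   = 3000
    to_port     = 3000
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.main.cidr_block]
  }
}

resource "aws_instance" "bastion" {
  ami                         = var.ami_id
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.public.id
  vpc_security_group_ids      = [aws_security_group.bastion_ssh.id]
  associate_public_ip_address = true

  # print the bastion's public IP once the instance is up
  provisioner "local-exec" {
    command = "echo ${self.public_ip}"
  }
}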
Database
Configure your AWS access keys (typically in ~/.aws/credentials):
[default]
aws_access_key_id = <your_access_key_id>
aws_secret_access_key = <your_secret_access_key>
Create the two workspaces:
$ terraform workspace new production
$ terraform workspace new terraform
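Inside the configuration, terraform.workspace can then drive per-environment settings. A minimal sketch; which workspace maps to which region is an assumption here and should match your .tfvars files:

locals {
  # terraform.workspace is either "terraform" or "production" after the commands above
  region_by_workspace = {
    terraform  = "us-east-1"
    production = "eu-central-1"
  }
}

provider "aws" {
  region = local.region_by_workspace[terraform.workspace]
}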
Initialize the working directory to download the necessary Terraform plugins:
terraform init
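The plugins that init downloads are whatever the configuration declares; a typical declaration for this project would look roughly like the following (the version constraint is an assumption):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}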
To create the Bastion host and security group, run:
terraform apply
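Each environment has its own variable definition file (see the Computing tasks), so the apply is typically run against the matching file, for example terraform apply -var-file=production.tfvars. A sketch of what such a file might contain; the variable names and values below are illustrative, not taken from the project:

# production.tfvars (illustrative)
region        = "eu-central-1"
vpc_cidr      = "10.0.0.0/16"
instance_type = "t2.micro"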
Once the Bastion host has been created, you can connect to it using SSH. The Bastion host will be assigned a public IP address, which you can use to connect to it from the public internet. For example:
ssh -i /path/to/private/key ec2-user@<bastion-public-ip>
Replace /path/to/private/key with the path to your private SSH key, and <bastion-public-ip> with the public IP address of the Bastion host.
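If the local-exec provisioner output is not handy, the same address can be exposed as a Terraform output. A small sketch, reusing the hypothetical aws_instance.bastion name from the earlier example:

output "bastion_public_ip" {
  description = "Public IP of the bastion host, used for SSH"
  value       = aws_instance.bastion.public_ip
}

After an apply, terraform output bastion_public_ip prints the address.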
Build the custom Jenkins master image that contains Ansible and the Docker client:
cd jenkins
docker build -t <imageName> -f jenkins_master.dockerfile .
Run the image:
docker run --name <containerName> -p 8080:8080 -d -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/terraform:/usr/bin/terraform <imageName>
The Jenkins master will then be available at http://localhost:8080/.
1- Add AWS Credentials to Jenkins
Navigate to "Manage Jenkins". Select "Manage Credentials". Add a new "AWS Credentials" entry with the necessary access key and secret key.
2- Create a Parameterized Pipeline
Go to "New Item". Enter a name for your pipeline and select "Pipeline". In the pipeline configuration, check "This project is parameterized". Add a choice parameter named ACTION with options like apply and destroy.
3- Build the Infrastructure (Choose Apply)
In the pipeline script, include logic to handle the apply action using Terraform or any relevant tool. Trigger the build and select "apply" when prompted.
4- Create a New Node
Navigate to "Manage Nodes and Clouds". Click on "New Node". Enter a name for the new node and select the appropriate node type (e.g., "Permanent Agent"). Configure the node settings, including the remote root directory and launch method.
5- Create a New Pipeline for the Application
Go to "New Item" again. Enter a name for the application pipeline and select "Pipeline". Configure the pipeline as required, including SCM settings and build triggers.
6- Build the Pipeline Again and Choose Destroy
Trigger the application pipeline build. When prompted, select "destroy" to tear down the infrastructure.