- Frontend: ReactJS app
- Backend: Monolithic Spring Boot app with JPA
- Database: MySQL
To see how the app works locally, run the backend with Docker:

```bash
docker run --name backend \
  -p <port>:8080 \
  -e MYSQL_DB_PASSWORD=<db-password> \
  -e MYSQL_DB_USER=<db-user> \
  -e MYSQL_DB_PORT=<db-port> \
  -e MYSQL_DB_HOST=<db-host> \
  -e MYSQL_DB_DATABASE=<db-database> \
  -d marteoma/pizzas-back
```
The `db-*` parameters are the connection values of your database. `<port>` is the port where the app will be exposed on the host.
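For example, assuming MySQL is already running on your machine with a database called `pizzas` (all the concrete values below are placeholders to adapt, and `host.docker.internal` resolves to the host on Docker Desktop), the command could look like this:

```bash
# Example values only; replace them with your own database settings
docker run --name backend \
  -p 8080:8080 \
  -e MYSQL_DB_PASSWORD=secret \
  -e MYSQL_DB_USER=pizzas_user \
  -e MYSQL_DB_PORT=3306 \
  -e MYSQL_DB_HOST=host.docker.internal \
  -e MYSQL_DB_DATABASE=pizzas \
  -d marteoma/pizzas-back
```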
The frontend runs as a typical React app. To run it locally:

- Put your host in the `.env` file: `REACT_APP_API_URL=<your-host>`
- Install the modules: `npm install`
- Run the app: `npm start`

Additionally, you can run the tests with `npm test`.
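For instance, if the backend from the Docker example above is listening on port 8080 of your machine, the `.env` could look like this (the exact URL scheme the app expects is an assumption):

```bash
# .env — example value, adjust to your backend host
REACT_APP_API_URL=http://localhost:8080
```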
The cloud infrastructure is based on ECS with EC2 instances. There is also an RDS instance with the MySQL database, and an S3 bucket working as hosting for the frontend.
The cloud architecture is designed on AWS and implemented in Terraform. To create the resources on an AWS account, first set up your AWS credentials. There are two ways:
- With aws-cli: `aws configure`
- Manually, set them in `~/.aws/credentials`:

```ini
[default]
region=<region>
aws_access_key_id=<access-key>
aws_secret_access_key=<secret-key>
```
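Whichever way you choose, you can check that the credentials are picked up correctly before running Terraform:

```bash
# Prints the account ID and IAM identity the configured credentials resolve to
aws sts get-caller-identity
```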
After setting up the credentials, you can run the Terraform steps to create the infrastructure:

```bash
terraform init
terraform apply
```
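Several values needed later (such as the database host and the ALB DNS) are Terraform outputs. Once `terraform apply` finishes, they can be read back like this (the exact output names depend on the project's `outputs.tf`, so the name below is hypothetical):

```bash
# List all outputs of the current state
terraform output

# Read a single output by name (hypothetical name)
terraform output rds_endpoint
```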
In the `variables.tf` file there are some key variables to take into account.

The instance type the EC2 instances will have:

```hcl
variable "instance_type" {
  default     = "t2.micro"
  description = "AWS instance type"
}
```

The minimum number of EC2 instances that your cluster can have:

```hcl
variable "min_cluster_size" {
  description = "Minimum size of the cluster"
  default     = "2"
}
```

The maximum number of EC2 instances that your cluster can have:

```hcl
variable "max_cluster_size" {
  description = "Maximum size of the cluster"
  default     = "3"
}
```

The number of Docker containers running with the app:

```hcl
variable "desired_tasks" {
  description = "Desired number of tasks running"
  default     = "2"
}
```
These values should be changed depending on the level of availability desired.
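If you prefer not to edit `variables.tf`, the same variables can also be overridden at apply time; the values below are only an example of a slightly larger setup:

```bash
terraform apply \
  -var="instance_type=t3.small" \
  -var="min_cluster_size=3" \
  -var="max_cluster_size=5" \
  -var="desired_tasks=3"
```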
There are also some values that are hidden, like the password to use in the database. These variables are described in `terraform.tfvars.example`. To make it work, create a `terraform.tfvars` file, add all the variables described in the example, and give each one a value. Like this:

```hcl
rds_database_password = "<password>"
```
As we see in the image, we have the frontend in the S3 bucket, which is connected to an ALB that balances traffic to the different instances of the cluster, and the instances connect to the RDS. The EC2 instances are in an Auto Scaling group, and both the RDS instance and the EC2 instances are in a private subnet, so they cannot be accessed directly.
Both pipelines (front and back) are made with CircleCI. The backend pipeline runs the following steps:
- Package and test
- Build the Docker image
- Push the Docker image to Docker Hub
- Update the ECS service
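Outside CircleCI, the same steps can be reproduced by hand, roughly like this (this is only a sketch: it assumes the backend uses the Maven wrapper, and the cluster and service names are hypothetical and must match what Terraform created):

```bash
# Package and test the Spring Boot app
./mvnw package

# Build the Docker image and push it to Docker Hub
docker build -t marteoma/pizzas-back .
docker push marteoma/pizzas-back

# Make the ECS service pull the new image
aws ecs update-service \
  --cluster pizzas-cluster \
  --service pizzas-backend \
  --force-new-deployment
```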
This pipeline needs the following environment variables to work.
- `AWS_ACCESS_KEY_ID`: Access key of the AWS account
- `AWS_DEFAULT_REGION`: Region for AWS
- `AWS_SECRET_ACCESS_KEY`: Secret key of the AWS account
- `DOCKER_LOGIN`: Username of Docker Hub
- `DOCKER_PASSWORD`: Token of Docker Hub
- `MYSQL_DB_DATABASE`: The name of the database created on AWS
- `MYSQL_DB_HOST`: The address of the database created on AWS; it is an output of Terraform
- `MYSQL_DB_PASSWORD`: The password of the database created on AWS
- `MYSQL_DB_PORT`: The port of the database created on AWS
- `MYSQL_DB_USER`: The user of the database created on AWS
The frontend pipeline runs the following steps:

- Install modules
- Test
- Build
- Upload to S3
This pipeline needs the following environment variables to work.
- `AWS_ACCESS_KEY_ID`: Access key of the AWS account
- `AWS_DEFAULT_REGION`: Region for AWS
- `AWS_SECRET_ACCESS_KEY`: Secret key of the AWS account
- `BACKEND`: DNS of the ALB; it is an output of Terraform
- `BUCKET`: Name of the S3 bucket
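As a rough equivalent of what the pipeline does with these variables, the build and upload could look like this (the exact commands and the URL format passed to the build are assumptions; `aws s3 sync` is one common way to publish a React build to a bucket):

```bash
# Point the build at the ALB and publish it to the hosting bucket
REACT_APP_API_URL="http://$BACKEND" npm run build
aws s3 sync build/ "s3://$BUCKET"
```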
There are two options for monitoring the app: directly in the CloudWatch log groups, or in the ECS console. In CloudWatch there are five log groups to watch the state of the app; these groups are under `/ecs`. Additionally, there are alarms, which work together with the Auto Scaling group.
For ECS, you can directly watch some metrics of the cluster just by entering the Amazon ECS page. If you go inside the cluster, you can get more information about its state.
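If you prefer the command line to the console, the same log groups can also be inspected with the AWS CLI (`aws logs tail` requires AWS CLI v2, and the exact group name will vary):

```bash
# List the log groups created for the ECS tasks
aws logs describe-log-groups --log-group-name-prefix /ecs

# Follow one of them live
aws logs tail /ecs/<log-group-name> --follow
```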