Here, two different AWS architectures for defining a REST API are presented.
In particular:
- REST API Gateway with AWS Lambda integrations (in Python). The Lambdas can be found here.
- Fargate + CI/CD Pipeline (with the application written in NodeJS - TypeScript, using ExpressJS). The application can be found here.
Both are deployed at the same time from the `main.tf` in the root of the `terraform` folder.
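For orientation, a minimal sketch of how the two stacks can be instantiated side by side in that root `main.tf` (module names and inputs here are assumptions, not the exact ones used in this repo):

```hcl
# Hypothetical wiring of the two deployments in the root main.tf.
resource "random_id" "suffix" {
  byte_length = 4
}

module "api_gateway" {
  source = "./api-gateway"

  identifier  = "rest-api"    # hypothetical value
  environment = "dev"         # hypothetical value
  suffix      = random_id.suffix.hex
}

module "fargate" {
  source = "./fargate"

  identifier  = "fargate-api" # hypothetical value
  environment = "dev"         # hypothetical value
  suffix      = random_id.suffix.hex
}
```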
Structure of the repo:
```
.
├── Makefile
├── README.md
├── infracost-breakdown.txt
├── infracost.yml
├── img
│   └── ...
├── scripts
│   └── ...
└── terraform
    ├── _lambdas
    │   └── replace-strings
    │       ├── main.py
    │       └── requirements.txt
    ├── _modules
    │   └── api-gateway-integration
    │       └── ...
    ├── api-gateway
    │   └── ...
    ├── fargate
    │   └── ...
    └── ...
```
With the `infracost-breakdown.txt` we can get an idea of how much the infrastructure costs per month. This breakdown does not consider the costs that are billed 'per use', like the CloudWatch logs costs, the API Gateway invocations, Lambda invocations, S3 storage, etc. These can be customized in the `infracost.yml` and used with another `infracost breakdown` run to get a more realistic idea of the cost. Ideally, one would create three usage profiles (low, medium and high) to see how much the costs actually increase based on the number of requests (considering also the fact that more requests mean more logs).
In the `terraform` folder we can find the IaC to deploy the APIs using the two different methods. The modules used in the root of the `terraform` folder are provided in the `terraform/api-gateway` and `terraform/fargate` folders. Other helper modules can be found in the `terraform/_modules` folder.

For the REST API Gateway Lambda functions, there is another folder, `terraform/_lambdas`, which is supposed to contain all the Lambda integrations for the API Gateway.
In the `scripts` folder there is a `main.py` (with its `requirements.txt`) which can be used to query the API endpoints in the following way:

- First, export the following variables:

  ```bash
  export API_ENDPOINT=<api_endpoint>
  export API_KEY=<api_key>
  ```

- Then, launch the Python script:

  ```bash
  python3 scripts/main.py query \
    -x POST \
    -path test/string/replace \
    -body '{"content": "<my content>"}'
  ```
The Makefile is used to deploy the Terraform code. It passes the backend configuration to Terraform; this configuration is in part defined in the Makefile itself and in part defined in my AWS account. In order not to expose my bucket name, I defined an AWS Parameter Store parameter called `/terraform/statefiles/bucket` containing the bucket name.
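For reference, in such a setup the backend block can be left partial, with the bucket name injected at `terraform init` time by the Makefile via `-backend-config` (the key and region below are illustrative assumptions):

```hcl
terraform {
  backend "s3" {
    # The bucket is intentionally omitted here: the Makefile reads it from the
    # /terraform/statefiles/bucket SSM parameter and passes it with
    # terraform init -backend-config="bucket=...".
    key    = "api-architectures/terraform.tfstate" # hypothetical state key
    region = "eu-west-1"                           # hypothetical region
  }
}
```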
There are different targets for the main Terraform operations: the `terraform-<something>` targets will first clean up the `.terraform` folder and the `.terraform.lock.hcl` file and run a clean `terraform init`, while the `terraform/<something>` targets will just run the corresponding command.

A `terraform init` + `apply` can be run with:

```bash
make terraform-apply
```

Or by splitting the operations:

```bash
make terraform/clean
make terraform/init
make terraform/apply
```
To run an infracost breakdown (you will need to have infracost installed and also be 'logged in', i.e. have acquired a token):

```bash
make infracost/breakdown
```
The Fargate and API Gateway deployments are aggregated in the `main.tf`. In that root folder, some of the common resources are also deployed, like the WAF, the Parameter Store parameter and a network setup (also in the `main.tf`) which is actually only used by the Fargate deployment.
Every module takes as input variables (declared as sketched after this list):

- `identifier` -> Should identify, with a human friendly name, the usage of the module / deployment;
- `environment` -> The supposed environment in which we are deploying resources;
- `suffix` -> A unique random id which makes sure that the resources created in the deployment have no conflicting names with other already present AWS resources.
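A minimal sketch of how these shared inputs can be declared in each module (the exact descriptions, defaults and validations in the repo may differ):

```hcl
variable "identifier" {
  description = "Human friendly name identifying the usage of the module / deployment"
  type        = string
}

variable "environment" {
  description = "Environment in which the resources are deployed"
  type        = string
}

variable "suffix" {
  description = "Unique random id appended to resource names to avoid naming conflicts"
  type        = string
}
```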
The Parameter Store parameter contains a map where the keys are the strings that need to be replaced with the corresponding values.
The WAF Web ACL is defined with the following rules:

- IP rate limit of 1000 requests per 5 minutes;
- Block known malicious IPs (`AWSManagedRulesAmazonIpReputationList`);
- Protection against some common exploits and vulnerabilities (`AWSManagedRulesCommonRuleSet`).

Metrics and logging are enabled for each rule, so we can check all the blocked/allowed requests and create nice dashboards with the metrics at hand.
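A condensed sketch of such a Web ACL (names, scope and priorities here are assumptions; the actual definition in the repo contains more settings):

```hcl
resource "aws_wafv2_web_acl" "this" {
  name  = "api-waf" # hypothetical name
  scope = "REGIONAL"

  default_action {
    allow {}
  }

  # IP based rate limit: block IPs exceeding 1000 requests per 5 minutes.
  rule {
    name     = "ip-rate-limit"
    priority = 1

    action {
      block {}
    }

    statement {
      rate_based_statement {
        limit              = 1000
        aggregate_key_type = "IP"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "ip-rate-limit"
      sampled_requests_enabled   = true
    }
  }

  # AWS managed reputation list blocking known malicious IPs; the
  # AWSManagedRulesCommonRuleSet rule follows the same pattern.
  rule {
    name     = "aws-ip-reputation"
    priority = 2

    override_action {
      none {}
    }

    statement {
      managed_rule_group_statement {
        name        = "AWSManagedRulesAmazonIpReputationList"
        vendor_name = "AWS"
      }
    }

    visibility_config {
      cloudwatch_metrics_enabled = true
      metric_name                = "aws-ip-reputation"
      sampled_requests_enabled   = true
    }
  }

  visibility_config {
    cloudwatch_metrics_enabled = true
    metric_name                = "api-waf"
    sampled_requests_enabled   = true
  }
}
```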
The following is the underlying architecture of the REST API Gateway deployment:
- AWS REST API Gateway
- Custom domain with ACM certificate
- Two stages: `test` and `prod`:
  - Two API keys, one per stage
  - Two API usage plans, one per stage
- Lambda packages automatically built and uploaded to S3
  - `test` stage points to the `$LATEST` versions of the Lambdas
  - `prod` stage points to custom versions of the Lambdas
- Path and query parameters validation
- Automatic API documentation for the integrations
- Body validation (with `Content-Type: application/json`)
- CloudWatch alarms for each Lambda alias (`test` and `prod`) on (see the sketch after this list):
  - Error count
  - Duration of execution
  - Number of invocations
- CloudWatch alarms for the API Gateway on:
  - Latency
  - `4xx` returned
  - `5xx` returned
- AWS X-Ray to trace request flows
- No authorizer
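As an illustration of the per-alias alarms listed above (the threshold, period and names are assumptions, not the values used in the repo), the error-count alarm can be defined roughly like this:

```hcl
resource "aws_cloudwatch_metric_alarm" "lambda_errors" {
  alarm_name          = "replace-strings-prod-errors" # hypothetical name
  namespace           = "AWS/Lambda"
  metric_name         = "Errors"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 1
  comparison_operator = "GreaterThanOrEqualToThreshold"

  # Scope the alarm to a single function alias (here, the prod alias).
  dimensions = {
    FunctionName = "replace-strings"
    Resource     = "replace-strings:prod"
  }
}
```

The duration and invocation-count alarms follow the same pattern with the `Duration` and `Invocations` metrics.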
This implementation contains all the necessary components to deploy a REST API in AWS with Lambda integrations, where the Lambdas can be found in the `_lambdas` folder.

All the Lambda integrations are defined using Terraform local variables (see here) containing all the necessary information to create both the Lambda function itself and the resources for the integration in the API Gateway. In particular, these local variables have the following structure:
```hcl
<identifier_of_the_lambda> = {
  lambda = {
    <Information for the lambda itself>
  }
  integration = {
    <Information for creating the integration in the API gateway>
  }
  aliases = {
    <Map of Api stages to lambda versions>
  }
}
```
These definitions are used in the `api-gateway-integration` module, where some defaults are assigned in case some of the parameters are omitted.
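As a purely illustrative example (the field names below are assumptions based on the structure above, not the exact keys used in the repo), a definition for the `replace-strings` Lambda could look like:

```hcl
locals {
  lambda_integrations = {
    "replace-strings" = {
      lambda = {
        handler = "main.lambda_handler"                     # hypothetical handler
        runtime = "python3.11"
        source  = "${path.module}/_lambdas/replace-strings" # hypothetical path
      }
      integration = {
        path   = "string/replace" # hypothetical resource path
        method = "POST"
      }
      aliases = {
        test = "$LATEST"
        prod = "3" # pinned Lambda version for the prod stage
      }
    }
  }
}
```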
In the `api-gateway-integration` module, the Lambda function itself is created (built and uploaded to S3 making use of the terraform-aws-modules/lambda/aws module) alongside the method, integration, model definition, aliases and documentation resources. Moreover, three CloudWatch alarms are created per Lambda alias, which will alert in case of errors, too long invocation times and too many invocations.
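For context, a minimal (assumed, not verbatim) invocation of that public module for building the package and publishing it to S3 looks roughly like:

```hcl
module "lambda" {
  source  = "terraform-aws-modules/lambda/aws"
  version = "~> 6.0" # assumed version constraint

  function_name = "replace-strings" # hypothetical
  handler       = "main.lambda_handler"
  runtime       = "python3.11"
  source_path   = "${path.module}/_lambdas/replace-strings"

  publish     = true # create a new Lambda version on each change, needed for the aliases
  store_on_s3 = true
  s3_bucket   = "my-lambda-artifacts-bucket" # hypothetical bucket
}
```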
The following is the underlying architecture of the Fargate deployment:
Infrastructure
- AWS ECS Fargate
- CI/CD with CodePipeline using:
  - CodeStar connection to GitHub to source the code on push and trigger the pipeline
  - CodeBuild to build the container
  - CodeDeploy to Blue/Green deploy the new revision to Fargate
  - ECR to store the images
- Elastic Load Balancer in front of Fargate
- Custom CNAME and TLS certificate in ACM
- WAF associated with the load balancer to increase protection
- Complete network setup
- Automatic scaling of the Fargate containers based on CPU and Memory usage, triggered when above 70% (see the sketch after this list)
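A rough sketch of that target-tracking autoscaling (the resource id and capacity bounds are assumptions):

```hcl
resource "aws_appautoscaling_target" "ecs" {
  service_namespace  = "ecs"
  resource_id        = "service/my-cluster/my-service" # hypothetical cluster/service
  scalable_dimension = "ecs:service:DesiredCount"
  min_capacity       = 1
  max_capacity       = 4
}

resource "aws_appautoscaling_policy" "cpu" {
  name               = "cpu-above-70"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.ecs.service_namespace
  resource_id        = aws_appautoscaling_target.ecs.resource_id
  scalable_dimension = aws_appautoscaling_target.ecs.scalable_dimension

  target_tracking_scaling_policy_configuration {
    target_value = 70

    predefined_metric_specification {
      predefined_metric_type = "ECSServiceAverageCPUUtilization"
    }
  }
}

# A second, analogous policy using ECSServiceAverageMemoryUtilization covers the memory threshold.
```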
Application
It can be found here: Fargate Api Sample
- NodeJS with Typescript
- ExpressJS for API server implementation
- Authentication via API key generated at infrastructure deployment time
- Unit tests with Mocha and Chai(-http)
- Logging with Winston
- Custom Error Handling route
- Dockerfile building the container image
- Buildspec definition for CodeBuild in order to build the image and push it to the ECR registry
The Fargate module makes use of a bunch of other modules (from my GitHub, in particular: terraform-modules). By taking a look at the `main.tf` in `terraform/fargate`, we can appreciate the usage of the following modules:

- `sns` -> Creates an SNS topic and subscribes an email. This topic will receive notifications regarding the CloudWatch alarms, CodePipeline events and app autoscaling events.
- `loadbalancer` -> Creates an Application Load Balancer with two target groups for Blue/Green deployment, a Route53 CNAME and an ACM certificate.
- `ecs` -> Creates the ECS Fargate cluster and the ECR repo to store the images, links the cluster to the load balancer and sets the deployment type to CodeDeploy.
- `autoscaling_ecs` -> Creates the app autoscaling which will scale the number of containers in the Fargate task whenever the CPU or Memory goes above 70%.
- `codepipeline` -> Creates the CodePipeline pipeline which will source (and be triggered by) the `KevinDeNotariis/fargate-api-sample` repo in GitHub, build the Docker image and push it to ECR in the CodeBuild stage, and then Blue/Green deploy in the CodeDeploy stage.
| Tool | Version |
| --- | --- |
| Terraform | 1.5.7 |
| GNU Make | 3.81 |
| Docker | 24.0.7 |
| Python | 3.11.6 |
| NodeJS | v21.1.0 |