Commit 54698d1

Merge pull request #2461 from kiranram/kiramram-feature-cloudwatch-metric-streams-firehose-terraform

New serverless pattern for CloudWatch Metric Streams to Kinesis Firehose

julianwood authored Oct 7, 2024
2 parents f186dee + 7495da7 commit 54698d1

Showing 6 changed files with 404 additions and 0 deletions.
53 changes: 53 additions & 0 deletions cloudwatch-metric-streams-firehose-terraform/README.md
@@ -0,0 +1,53 @@
# Amazon CloudWatch Metric Streams to Amazon Data Firehose with Terraform

This pattern demonstrates how to create an Amazon CloudWatch Metric Stream that delivers metrics to Amazon Data Firehose, which saves them to Amazon S3. It also demonstrates metric selection, so that only certain metrics for certain AWS services are streamed from CloudWatch to Amazon Data Firehose.

Learn more about this pattern at Serverless Land Patterns: https://serverlessland.com/patterns/cloudwatch-metric-streams-firehose-terraform

Important: this application uses various AWS services and there are costs associated with these services after the Free Tier usage - please see the [AWS Pricing page](https://aws.amazon.com/pricing/) for details. You are responsible for any AWS costs incurred. No warranty is implied in this example.

## Requirements

* [Create an AWS account](https://portal.aws.amazon.com/gp/aws/developer/registration/index.html) if you do not already have one and log in. The IAM user that you use must have sufficient permissions to make necessary AWS service calls and manage AWS resources.
* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) installed and configured
* [Git Installed](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
* [Terraform](https://www.terraform.io/) installed

## Deployment Instructions

1. Create a new directory, navigate to that directory in a terminal and clone the GitHub repository:
```
git clone https://github.com/aws-samples/serverless-patterns
```
2. Change directory to the pattern directory:
```
cd serverless-patterns/cloudwatch-metric-streams-firehose-terraform
```
3. Run the following Terraform commands to deploy to your AWS account in the desired region (default is eu-west-2):
```
terraform init
terraform validate
terraform plan -var region=<YOUR_REGION>
terraform apply -var region=<YOUR_REGION>
```
## How it works
When AWS services are provisioned, the metrics listed in the IaC are captured and streamed to Amazon Data Firehose. The destination in this case is an S3 bucket, where the metrics are saved. The code defaults to eu-west-2, but any region can be targeted via the CLI variable shown above. The example includes the AWS/EC2 and AWS/RDS namespaces with a couple of metrics in each; these can easily be changed, and new namespaces and/or metrics appended as required.
![pattern](Images/pattern.png)
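For example, to stream additional metrics you can append further `include_filter` blocks to the `aws_cloudwatch_metric_stream` resource in `main.tf`. A sketch adding two more namespaces (the metric names shown are standard CloudWatch metrics for those services, but verify them against your monitoring needs):

```
# Stream selected Lambda metrics alongside the EC2 and RDS filters
include_filter {
  namespace    = "AWS/Lambda"
  metric_names = ["Invocations", "Errors", "Duration"]
}

# Omitting metric_names streams every metric in the namespace
include_filter {
  namespace = "AWS/SQS"
}
```

Remember that a single metric stream can use either `include_filter` or `exclude_filter` blocks, but not both.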
## Testing
After deployment, launch an EC2 instance in the same region; after a few minutes, metric data will appear in the S3 bucket. Each delivered file is GZIP-compressed and contains the metrics as JSON objects.
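If you prefer to generate the test traffic with Terraform as well, a minimal sketch of a throwaway EC2 instance is below. This resource is not part of the pattern's `main.tf`; the AMI filter and instance type are assumptions, and any running instance in the region will emit AWS/EC2 metrics:

```
# Hypothetical test instance to generate AWS/EC2 metrics; destroy when done
data "aws_ami" "al2023" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]
  }
}

resource "aws_instance" "metrics_test" {
  ami           = data.aws_ami.al2023.id
  instance_type = "t3.micro"

  tags = {
    Name = "metric-streams-test"
  }
}
```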
## Cleanup
1. Delete the stack:
```
terraform destroy -var region=<YOUR_REGION>
```
----
Copyright 2024 Amazon.com, Inc. or its affiliates. All Rights Reserved.
SPDX-License-Identifier: MIT-0
@@ -0,0 +1,96 @@
{
"title": "CloudWatch Metric Streams to Amazon Data Firehose",
  "description": "Create a CloudWatch Metric Stream that delivers metrics to Amazon Data Firehose and saves them in Amazon S3",
"language": "",
"level": "300",
"framework": "Terraform",
"introBox": {
"headline": "How it works",
"text": [
      "This pattern sets up an Amazon CloudWatch Metric Stream and associates it with Amazon Data Firehose. With this setup you can continuously stream metrics to a destination of your choice with near-real-time delivery and low latency. Several destinations are supported, including Amazon Simple Storage Service (S3) and third-party providers such as Datadog, New Relic, Splunk and Sumo Logic; this pattern uses S3. You can stream all CloudWatch metrics, or use filters to stream only specified metrics. Each metric stream can include up to 1000 filters that include or exclude namespaces or specific metrics, although a single stream can either include or exclude metrics, not both. If new metrics matching the filters in place are added, an existing metric stream automatically includes them.",
      "Traditionally, AWS customers relied on polling CloudWatch metrics through APIs, which underpinned all sorts of monitoring, alerting and cost-management tools. With metric streams, customers can create low-latency, scalable streams of metrics and filter them at the namespace level, including or excluding entire namespaces. Where more granular filtering is required, metric name filtering in metric streams provides that precision.",
      "A useful feature of metric streams is that you can create metric name filters for metrics that do not yet exist in your AWS account. For example, you can define filters for the AWS/EC2 namespace if you know an application will produce metrics in that namespace, even though the application has yet to be deployed; those metrics will not exist in your account until the service is provisioned.",
      "This pattern also creates the required roles and policies for the services, scoped to the permissions required, following the principle of least privilege. The roles and policies can be expanded if additional services come into play."
]
},
"gitHub": {
"template": {
"repoURL": "https://github.com/aws-samples/serverless-patterns/tree/main/cloudwatch-metric-streams-firehose-terraform",
"templateURL": "serverless-patterns/cloudwatch-metric-streams-firehose-terraform",
"projectFolder": "cloudwatch-metric-streams-firehose-terraform",
"templateFile": "main.tf"
}
},
"resources": {
"bullets": [
{
"text": "Use metric streams to continually stream CloudWatch Metrics",
"link": "https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html"
},
{
"text": "Amazon Data Firehose - Streaming Data Pipeline",
"link": "https://aws.amazon.com/firehose/"
},
{
"text": "Amazon S3 - Cloud Object Storage",
"link": "https://aws.amazon.com/s3/"
}
]
},
"deploy": {
"text": [
"terraform init",
"terraform plan",
"terraform apply"
]
},
"testing": {
"text": [
      "In the same account and region, launch an EC2 instance. You should see metrics arrive in the S3 bucket within a few minutes."
]
},
"cleanup": {
"text": [
"terraform destroy"
]
},
"authors": [
{
"name": "Kiran Ramamurthy",
"image": "n/a",
"bio": "I am a Senior Partner Solutions Architect for Enterprise Transformation. I work predominantly with partners and specialize in migrations and modernization.",
"linkedin": "kiran-ramamurthy-a96341b",
"twitter": "n/a"
}
],
"patternArch": {
"icon1": {
"x": 20,
"y": 50,
"service": "cloudwatch",
"label": "Amazon CloudWatch"
},
"icon2": {
"x": 50,
"y": 50,
"service": "kinesis-firehose",
"label": "Amazon Kinesis Firehose"
},
"line1": {
"from": "icon1",
"to": "icon2",
      "label": "Metrics"
},
"icon3": {
"x": 80,
"y": 50,
"service": "s3",
"label": "Amazon S3"
},
"line2": {
"from": "icon2",
"to": "icon3",
      "label": "Metrics"
}
}
}
66 changes: 66 additions & 0 deletions cloudwatch-metric-streams-firehose-terraform/example-pattern.json
@@ -0,0 +1,66 @@
{
"title": "CloudWatch Metric Streams to Amazon Data Firehose",
  "description": "Create a CloudWatch Metric Stream that delivers metrics to Amazon Data Firehose and saves them in Amazon S3",
"language": "",
"level": "300",
"framework": "Terraform",
"introBox": {
"headline": "How it works",
"text": [
      "This pattern sets up an Amazon CloudWatch Metric Stream and associates it with Amazon Data Firehose. With this setup you can continuously stream metrics to a destination of your choice with near-real-time delivery and low latency. Several destinations are supported, including Amazon Simple Storage Service (S3) and third-party providers such as Datadog, New Relic, Splunk and Sumo Logic; this pattern uses S3. You can stream all CloudWatch metrics, or use filters to stream only specified metrics. Each metric stream can include up to 1000 filters that include or exclude namespaces or specific metrics, although a single stream can either include or exclude metrics, not both. If new metrics matching the filters in place are added, an existing metric stream automatically includes them.",
      "Traditionally, AWS customers relied on polling CloudWatch metrics through APIs, which underpinned all sorts of monitoring, alerting and cost-management tools. With metric streams, customers can create low-latency, scalable streams of metrics and filter them at the namespace level, including or excluding entire namespaces. Where more granular filtering is required, metric name filtering in metric streams provides that precision.",
      "A useful feature of metric streams is that you can create metric name filters for metrics that do not yet exist in your AWS account. For example, you can define filters for the AWS/EC2 namespace if you know an application will produce metrics in that namespace, even though the application has yet to be deployed; those metrics will not exist in your account until the service is provisioned.",
      "This pattern also creates the required roles and policies for the services, scoped to the permissions required, following the principle of least privilege. The roles and policies can be expanded if additional services come into play."
]
},
"gitHub": {
"template": {
"repoURL": "https://github.com/aws-samples/serverless-patterns/tree/main/cloudwatch-metric-streams-firehose-terraform",
"templateURL": "serverless-patterns/cloudwatch-metric-streams-firehose-terraform",
"projectFolder": "cloudwatch-metric-streams-firehose-terraform",
"templateFile": "main.tf"
}
},
"resources": {
"bullets": [
{
"text": "Use metric streams to continually stream CloudWatch Metrics",
"link": "https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Metric-Streams.html"
},
{
"text": "Amazon Data Firehose - Streaming Data Pipeline",
"link": "https://aws.amazon.com/firehose/"
},
{
"text": "Amazon S3 - Cloud Object Storage",
"link": "https://aws.amazon.com/s3/"
}
]
},
"deploy": {
"text": [
"terraform init",
"terraform plan",
"terraform apply"
]
},
"testing": {
"text": [
      "In the same account and region, launch an EC2 instance. You should see metrics arrive in the S3 bucket within a few minutes."
]
},
"cleanup": {
"text": [
"terraform destroy"
]
},
"authors": [
{
"name": "Kiran Ramamurthy",
"image": "n/a",
"bio": "I am a Senior Partner Solutions Architect for Enterprise Transformation. I work predominantly with partners and specialize in migrations and modernization.",
"linkedin": "kiran-ramamurthy-a96341b",
"twitter": "n/a"
}
]
}
152 changes: 152 additions & 0 deletions cloudwatch-metric-streams-firehose-terraform/main.tf
@@ -0,0 +1,152 @@
provider "aws" {
region = var.region

default_tags {
tags = {
metrics-test = "aws-metric-streams"
}
}
}

data "aws_availability_zones" "available" {
state = "available"
}

data "aws_caller_identity" "current" {}

# Define role for firehose to send metrics to S3
resource "aws_iam_role" "firehose_to_s3" {
name_prefix = "test_streams"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "firehose.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}

# Define a policy with permissions to write to S3
resource "aws_iam_role_policy" "firehose_to_s3" {
name_prefix = "test_streams"
role = aws_iam_role.firehose_to_s3.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:AbortMultipartUpload",
"s3:GetBucketLocation",
"s3:GetObject",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:PutObject"
],
"Resource": [
"${aws_s3_bucket.metric_stream.arn}",
"${aws_s3_bucket.metric_stream.arn}/*"
]
}
]
}
EOF
}

# Define the role for the metric stream to assume when writing to Firehose
resource "aws_iam_role" "metric_stream_to_firehose" {
name_prefix = "test_streams"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "streams.metrics.cloudwatch.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}

resource "aws_iam_role_policy" "metric_stream_to_firehose" {
name_prefix = "test_streams"
role = aws_iam_role.metric_stream_to_firehose.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"firehose:PutRecord",
"firehose:PutRecordBatch"
],
"Resource": "${aws_kinesis_firehose_delivery_stream.metrics.arn}"
}
]
}
EOF
}

# Create the S3 bucket to hold the metrics
resource "aws_s3_bucket" "metric_stream" {
bucket = "test-streams-${data.aws_caller_identity.current.account_id}-${var.region}"

tags = var.tags

# 'true' allows terraform to delete this bucket even if it is not empty.
force_destroy = true
}

# Create the Amazon Data Firehose instance
resource "aws_kinesis_firehose_delivery_stream" "metrics" {
name = "test_streams"
destination = "extended_s3"

extended_s3_configuration {
role_arn = aws_iam_role.firehose_to_s3.arn
bucket_arn = aws_s3_bucket.metric_stream.arn

compression_format = var.s3_compression_format
}

}

# Create the metric streams for the desired services
resource "aws_cloudwatch_metric_stream" "metric-stream" {
name = "test_streams"
role_arn = aws_iam_role.metric_stream_to_firehose.arn
firehose_arn = aws_kinesis_firehose_delivery_stream.metrics.arn
output_format = var.output_format


# There can be an exclude_filter block, but it is
# mutually exclusive with the include_filter, which means
# you can have one of them at any time.

include_filter {
namespace = "AWS/EC2"
metric_names = ["CPUUtilization", "NetworkOut"]
}

include_filter {
namespace = "AWS/RDS"
metric_names = ["CPUUtilization", "DatabaseConnections"]
}

tags = var.tags
}
