AWS S3 CloudWatch connector can only be used with a single CloudWatch log group #9056
Comments
Thank you for submitting an Issue to the Azure Sentinel GitHub repo! You should expect an initial response to your Issue from the team within 5 business days. Note that this response may be delayed during holiday periods. For urgent, production-affecting issues please raise a support ticket via the Azure Portal.
Hello @paulschwarzenberger, thanks for flagging this issue. We will get back to you on this soon. Thanks!
Hi @paulschwarzenberger, we are checking this issue with the team and will share an update with you. Thanks!
The script also has hardcoded timestamps.
Hi @paulschwarzenberger, as this is an enhancement issue, we received an update from the data collection team: CloudWatch logs offer a flexible schema (e.g. the message contains the informative content). We would like to extract the log group, stream, and account name and direct them to specific columns, but it seems difficult to establish a universal method of extracting this information directly from the message itself, given that each customer's CloudWatch logs may vary.
Hi @paulschwarzenberger, hope you are doing well. We are waiting for your response on the above comment. Thanks!
In our opinion, Microsoft Sentinel should provide a connector solution for AWS CloudWatch logs which is highly scalable and doesn't require custom configuration by each customer. As I described, the current solution isn't usable in practice because it only works with a single CloudWatch log group, whereas all AWS customers have many CloudWatch log groups, often 100s or 1,000s. Your example Lambda function isn't currently usable, because it's not practical to have a separate Lambda function for every single log group.

The AWS recommended approach for CloudWatch log aggregation is to use a central S3 bucket, Firehose data stream(s) and CloudWatch Logs subscription filters, as detailed in this link. I suggest that you update the Sentinel AWS S3 connector for CloudWatch to ingest logs delivered to S3 in this manner. The log format produced by Kinesis Data Firehose delivery streams for CloudWatch Logs subscription filters is consistent and predictable, so it is ideal for mapping values into a standard Sentinel data table.

In our implementation, we use Kinesis Data Firehose delivery streams from CloudWatch Logs subscription filters, going to an S3 bucket for pre-processed CloudWatch data. That bucket triggers a Lambda function which transforms the data and copies it to another S3 bucket integrated with Sentinel via the AWS S3 CloudWatch connector. An alternative architecture might be to use a data transform Lambda within the Firehose delivery stream; however, I haven't tested that. The Lambda transform function used in our implementation is based on your example, and extracts the fields from the message.

I'd like to see the Sentinel team modify the CloudWatch data table, adding columns for the AWS account ID, CloudWatch log group name, and CloudWatch log stream name. I'd be happy to show you example infrastructure on a call, and I can provide a copy of our Lambda transform code if that would be helpful.
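The example log referred to above was not captured in this thread. For illustration, the payload that Kinesis Data Firehose delivers for a CloudWatch Logs subscription filter generally takes the following shape (all values here are made up, not the author's actual log):

```json
{
  "messageType": "DATA_MESSAGE",
  "owner": "111122223333",
  "logGroup": "/aws/lambda/example-app",
  "logStream": "2024/01/15/[$LATEST]abcd1234example",
  "subscriptionFilters": ["sentinel-filter"],
  "logEvents": [
    {
      "id": "36103013910414870000000000000000000",
      "timestamp": 1700000000000,
      "message": "{\"eventType\":\"ExampleEvent\"}"
    }
  ]
}
```

Note that `owner`, `logGroup` and `logStream` appear as top-level fields alongside the raw `message`, which is what makes this format predictable to map into dedicated table columns.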
Hi @paulschwarzenberger, this is an enhancement issue, so we are reaching out to the data collection team. Once we receive an update from the team, we will update you. Thanks!
Hi @paulschwarzenberger, could you please share your email address with us? Thanks!
No problem, it's
Hi @paulschwarzenberger, thanks for sharing your email address with us. We will share these details with our data collection team so that they can reach out to you for the information they require. Thanks!
Just want to +1 Paul's excellent description of the problem set and suitable solutions.
@v-sudkharat Just wondering if there are any updates on the firehose solution suggested by @paulschwarzenberger |
Describe the bug
The Sentinel AWS S3 CloudWatch connector uses a Sentinel data table for CloudWatch, which only has two columns that can be populated by the ingested event: timestamp and message. This works fine for a single CloudWatch log group, but most AWS customers have 100s or 1000s of CloudWatch log groups. There's currently no way to tell which AWS account the event is coming from or which log group or log stream is generating the event.
It's therefore not practical to create meaningful alerts or incidents, e.g. a particular event occurred in the production AWS account.
For this to be a workable solution for enterprise customers, 3 additional columns should be added to the CloudWatch data table: AWS Account ID, CloudWatch log group name, CloudWatch log stream name.
To Reproduce
Steps to reproduce the behavior:
AWSCloudWatch
Expected behavior
Screenshots
CloudWatch events ingested to Sentinel:
Additional context
The documentation and example code do not describe an efficient architecture for an enterprise: a separate Lambda function would be required for every CloudWatch log group. The recommended approach from AWS is to use CloudWatch Logs subscription filters and a Kinesis data stream into a single S3 bucket, and then a single Lambda function to transform the data into the correct format for Sentinel. Please contact me via LinkedIn if you'd like a demo of how we've implemented this.
However, this is a peripheral issue; the most important point is the need for 3 additional columns in the CloudWatch data table.
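The transform step described above can be sketched roughly as follows. This is a hypothetical illustration, not the author's actual Lambda code: it assumes the standard CloudWatch Logs subscription payload (gzip-compressed JSON with `owner`, `logGroup`, `logStream` and `logEvents` fields), and the function and output field names are my own.

```python
# Hypothetical sketch of the single transform Lambda's core logic:
# decompress one Firehose-delivered CloudWatch Logs subscription payload
# and flatten each log event together with its account, log group and
# log stream, ready to be written out for the Sentinel S3 connector.
import gzip
import json


def parse_subscription_payload(raw: bytes) -> list:
    """Decompress a subscription payload and return one dict per log event."""
    doc = json.loads(gzip.decompress(raw))
    if doc.get("messageType") != "DATA_MESSAGE":
        return []  # skip Firehose CONTROL_MESSAGE health-check records
    return [
        {
            "timestamp": event["timestamp"],
            "message": event["message"],
            "awsAccountId": doc["owner"],
            "logGroup": doc["logGroup"],
            "logStream": doc["logStream"],
        }
        for event in doc["logEvents"]
    ]
```

In a real Lambda, a handler would wrap this with boto3 calls to read the triggering object from the pre-processing S3 bucket and write the flattened records to the Sentinel-integrated bucket; that plumbing is omitted here to keep the sketch self-contained.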