
cloudwatch_out not working - logs never show up in CloudWatch #70

Open
deweller opened this issue Jun 3, 2017 · 4 comments

@deweller

deweller commented Jun 3, 2017

Put simply, I just can't get this to work.

Here is my config:

  <match applog.**>
    @type cloudwatch_logs
    log_group_name application-logs
    use_tag_as_stream true
    auto_create_stream true
    message_keys message
  </match>

The log group and the stream are created as expected, but no entries ever show up.

There is no information in the fluentd logs.

Does this plugin work? Is anyone else experiencing this issue? How can I debug this?

Thanks.

@tkob
Contributor

tkob commented Oct 26, 2017

I'm experiencing the same problem (streams are created, but no log events appear). My config is similar (use_tag_as_stream true and auto_create_stream true, but no message_keys).

When fluentd is launched with the -v option, debug messages saying "Calling PutLogEvents API" are emitted. The odd thing is that these debug messages are themselves written to the "fluent.debug" stream.

@tkob
Contributor

tkob commented Oct 30, 2017

In my case, I found the cause:

  • Fluentd receives logs from a remote syslog server that uses the JST (+09:00) timezone, while the fluentd server itself uses UTC (+00:00).
  • RFC 3164 timestamps do not include a timezone.
  • By default, fluentd's syslog input plugin uses the remote timestamp as the event time. This causes a timestamp discrepancy (+00:00 vs. +09:00).
  • AWS PutLogEvents skips events that are more than 2 hours in the future.
  • The current version of this plugin does not report skipped log events.
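The arithmetic behind those bullets can be sketched in a few lines (an illustrative example with made-up timestamps, not code from the plugin):

```python
from datetime import datetime, timedelta, timezone

# An RFC 3164 timestamp carries no timezone. Suppose the remote syslog
# server's wall clock reads 18:00 JST (+09:00); the real instant is 09:00 UTC.
utc_now = datetime(2017, 10, 30, 9, 0, 0, tzinfo=timezone.utc)
jst_wall_clock = datetime(2017, 10, 30, 18, 0, 0)  # what appears in the message

# A UTC fluentd server can only read the naive timestamp as UTC.
parsed_as_utc = jst_wall_clock.replace(tzinfo=timezone.utc)

skew = parsed_as_utc - utc_now
print(skew)  # 9:00:00 -- the event appears 9 hours in the future

# PutLogEvents silently skips events more than 2 hours in the future.
MAX_FUTURE = timedelta(hours=2)
print(skew > MAX_FUTURE)  # True -- the event is dropped without an error
```

Any offset larger than two hours (JST is +9, NZ around +13) pushes the whole batch past the acceptance window, which matches the "streams created, no events" symptom.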

I do not know whether this applies to @deweller's case, but a similar cause seems very probable to me.

My workaround is to edit the regexp of the syslog input format so that it does not use the remote timestamp as the event's timestamp.
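The idea looks roughly like this (a hypothetical sketch, not my exact config): match the RFC 3164 timestamp so the line still parses, but do not capture it into a time group, so fluentd falls back to its own receive time.

```
  <source>
    @type syslog
    port 5140
    tag applog
    # Illustrative regexp: the timestamp is matched by a non-capturing
    # group, so fluentd stamps the event with its receive time instead.
    format /^\<(?<pri>[0-9]+)\>(?:[A-Z][a-z]{2} +[0-9]+ [0-9:]+) (?<host>[^ ]*) (?<message>.*)$/
  </source>
```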

@uover82

uover82 commented Feb 13, 2019

Hi,
Is this still an open issue? I'm experiencing something similar: worker log data with no noticeable errors, yet no CloudWatch logs. If the timestamp discrepancy is the cause, could fluent.conf configuration (localtime, timezone, etc.) be a viable alternative solution?
Thanks

@adamcheney

Yeah, I'm also experiencing the same issue. I'll take a look at the timestamp theory, though in our case (we're based in NZ) UTC timestamps will be around 13 hours behind local time. Otherwise, the symptoms are the same as the original poster's: the log group and log stream are created without problems, but no actual log events appear. Trace-level logging shows:

2019-03-28 01:41:49 +0000 [trace]: #0 executing sync write chunk="5851da4c65301cbeb92c43f8e2e5f113"
2019-03-28 01:41:52 +0000 [debug]: #0 Called PutLogEvents API group="k8s-workloads" stream="kubexray" events_count=5 events_bytesize=6465 sequence_token=nil thread=70351681997220 request_sec=0.246174792
2019-03-28 01:41:52 +0000 [trace]: #0 write operation done, committing chunk="5851da4c65301cbeb92c43f8e2e5f113"
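Note that "Called PutLogEvents API" succeeding does not mean the events were accepted: the API reports out-of-range events only via index fields in rejectedLogEventsInfo, which the plugin version discussed here did not surface. A rough local model of the documented rules (events more than 2 hours in the future or older than 14 days are rejected; field names from the API, the rest is an illustrative sketch):

```python
from datetime import datetime, timedelta, timezone

MAX_FUTURE = timedelta(hours=2)  # PutLogEvents rejects events further ahead
MAX_PAST = timedelta(days=14)    # ...and events older than this

def check_batch(event_times, now):
    """Return indices mimicking PutLogEvents' rejectedLogEventsInfo:
    (tooNewLogEventStartIndex, tooOldLogEventEndIndex), None if unused.
    Assumes event_times is sorted ascending, as the API requires."""
    too_new_start = None
    too_old_end = None
    for i, t in enumerate(event_times):
        if t - now > MAX_FUTURE and too_new_start is None:
            too_new_start = i
        if now - t > MAX_PAST:
            too_old_end = i
    return too_new_start, too_old_end

now = datetime(2019, 3, 28, 1, 41, 52, tzinfo=timezone.utc)
events = [
    now - timedelta(days=20),   # too old: silently dropped
    now,                        # accepted
    now + timedelta(hours=13),  # too new: a +13h local time misread as UTC
]
print(check_batch(events, now))  # (2, 0)
```

A +13h offset falls well into the "too new" range, so every event in the batch would be skipped even though the API call itself returns success.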
