@logtail/pino can't be flushed in serverless environments #112
Comments
Hi @roberttod! Thank you for reaching out! I think you are right that the transport running in a worker thread could be causing issues here. Thanks again for raising this.
Any updates on this? We are having the same issue hosting on Vercel.
Can we bump this up?
Hi @roberttod, @Philitician and @ckblockit! Thanks for reporting this and for your patience. I tried my best to go through reported issues in Pino, the docs, and a couple of other transports to see whether we're doing anything incorrectly in our code. Since we're using the abstract transport library and are registering the flush callback on close, I believe it should be solid.

I've tried to replicate the issue using the following handler and Serverless config:

```js
const pino = require('pino');

const transport = pino.transport({
  target: "@logtail/pino",
  options: { sourceToken: 'MY_BETTERSTACK_TOKEN' }
});

const logger = pino(transport);

exports.hello = async (event) => {
  logger.info("Hello from serverless");
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v4.0! Your function executed successfully!'
    })
  };
};
```

```yaml
service: test-log

provider:
  name: aws
  runtime: nodejs20.x

functions:
  hello:
    handler: handler.hello
```

And I got 100% reliability of logs being sent to Better Stack. Could you please point me to anything I might have missed? A reproducible code example would be best. Maybe it's an issue with a specific dependency version, or with the serverless setup. I'll be happy to jump right back into it.

@ckblockit Are you having more Pino transports set up simultaneously where only Better Stack is experiencing dropped logs? If so, could you share which other transports you are using, so I can check for any significant differences?
Are the logs being dropped intermittently, or does it happen every time in a certain setup?
Hi @PetrHeinz, thanks for getting back to this issue. We haven't seen any dropped logs since we made some changes to our setup.
To answer your question, we are using 2 transports.
We also have a hook set up to send exceptions to Sentry.
The way we verified log dropping was by checking against Sentry, stdout <> BetterStack. I will continue to report here if we notice any dropping again.
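For reference, a Pino configuration with two transports like the one described above might look roughly as follows. This is only a sketch: the second target (`pino-pretty`) and the token environment variable are assumptions, since the exact transports aren't named here.

```js
const pino = require("pino");

const transport = pino.transport({
  targets: [
    {
      // Ships logs to Better Stack
      target: "@logtail/pino",
      options: { sourceToken: process.env.BETTERSTACK_SOURCE_TOKEN },
      level: "info",
    },
    {
      // Assumed second transport for readable stdout output
      target: "pino-pretty",
      level: "info",
    },
  ],
});

const logger = pino(transport);
logger.info("This line is delivered to both targets");
```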
I am using RedwoodJS on Netlify functions for our app, and so I need to use the pino logtail transport for logging.
I noticed recently that logs are going missing, especially when they are at the end of a chain of calls to a lambda function. I fixed this by a) implementing my own Writable stream which sends to Better Stack using @logtail/node, because I suspected that transports might be the problem since they run in a different thread, and b) awaiting logtail.flush() after any of my lambdas complete.
Is there a way to resolve this without all the custom code? I tried using pino.flush(), but I don't think it actually calls logtail.flush(), and I think that's the problem (i.e. it might not be pino transports that are the issue). I see that logtail.flush is called in closeFunc in the transport, but I don't think that gets called on pino.flush().
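For illustration, here is a minimal sketch of the workaround described above (a custom Writable destination plus an explicit flush), assuming @logtail/node's `Logtail` client; the handler name and the `BETTERSTACK_SOURCE_TOKEN` variable are placeholders, not part of the original report:

```js
const { Writable } = require("stream");
const { Logtail } = require("@logtail/node");
const pino = require("pino");

const logtail = new Logtail(process.env.BETTERSTACK_SOURCE_TOKEN);

// Pino writes newline-delimited JSON to its destination stream; forwarding each
// record from the main thread avoids the worker-thread transport entirely.
const logtailStream = new Writable({
  write(chunk, _encoding, callback) {
    try {
      const record = JSON.parse(chunk.toString());
      logtail.log(record.msg ?? "", "info", record);
    } catch {
      // If a line isn't valid JSON, fall back to stdout so nothing is lost.
      process.stdout.write(chunk);
    }
    callback();
  },
});

const logger = pino(logtailStream);

exports.handler = async (event) => {
  logger.info("Hello from serverless");
  // Flush buffered logs before the lambda is frozen or terminated.
  await logtail.flush();
  return { statusCode: 200, body: "ok" };
};
```

Because every record goes through @logtail/node's own buffer on the main thread, awaiting `logtail.flush()` before returning ensures the batch is sent before the function is suspended.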