Forwards logs to the server if server supports it #361

Open · jabberrock wants to merge 4 commits into base: main

Conversation

@jabberrock commented Nov 11, 2024

  1. Adds a new class LogQueue which stores up to a fixed number of log messages in a circular buffer. It holds up to 30 pending log messages of at most 255 B each, which takes up about 7.4 KB of statically allocated memory; there is no dynamic allocation. (See the sketch below.)
  2. Whenever Logger is called, the log message is pushed into the LogQueue.
  3. Whenever the Connection loop is run, it will try to send one pending message to the server.

Note that sending logs is best-effort since it's over UDP.
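A minimal sketch of the data structure, assuming the ring buffer tracks a head index and a count, and reflecting the drop-oldest overflow behaviour settled on in the review below (illustrative only, not the exact code in this PR):

```cpp
#include <cstddef>
#include <cstring>

class LogQueue {
public:
	static constexpr size_t MaxMessages = 30;
	static constexpr size_t MaxMessageLength = 255;

	// Append a message, truncating it to MaxMessageLength bytes if needed.
	// When the queue is full, the earliest message is silently dropped.
	void push(const char* message) {
		size_t tail = (m_Head + m_Count) % MaxMessages;
		if (m_Count == MaxMessages) {
			m_Head = (m_Head + 1) % MaxMessages;  // drop the earliest entry
		} else {
			++m_Count;
		}
		std::strncpy(m_Messages[tail], message, MaxMessageLength);
		m_Messages[tail][MaxMessageLength] = '\0';
	}

	// Oldest pending message, or nullptr if nothing is queued.
	const char* front() const { return m_Count > 0 ? m_Messages[m_Head] : nullptr; }

	// Remove the oldest message once the Connection loop has sent it.
	void pop() {
		if (m_Count > 0) {
			m_Head = (m_Head + 1) % MaxMessages;
			--m_Count;
		}
	}

private:
	// Fixed-size storage, statically allocated: no heap use on the tracker.
	char m_Messages[MaxMessages][MaxMessageLength + 1] = {};
	size_t m_Head = 0;
	size_t m_Count = 0;
};
```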

Related PRs:
SlimeVR/SolarXR-Protocol#153
SlimeVR/SlimeVR-Server#1234

else
{
	// Overwrite the last message
	setMessageAt(m_Count - 1, OverflowMessage);
Contributor:
Would probably be a good idea to consider whether overwriting the first messages would make more sense in this context (though honestly it will never happen).

Author:
You're probably right; it's better to keep the newest logs rather than the oldest logs.

Author (Nov 12, 2024):
Do you think it's even necessary to send an [OVERFLOW] message? I'm worried that once we get into overflow territory, every message we send to the server will just be [OVERFLOW].

Maybe if we can't keep up, we just silently drop (the oldest) messages?

Author:
Addressed this by silently dropping the earliest message.
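The relevant change could look roughly like this (a sketch, assuming the queue tracks m_Head/m_Count; not the PR's exact code):

```cpp
// Sketch only: instead of overwriting the newest slot with OverflowMessage,
// advance the head so the earliest entry is silently discarded.
if (m_Count == MaxMessages) {
	m_Head = (m_Head + 1) % MaxMessages;  // drop the earliest message
	--m_Count;
}
// The new message is then appended at the tail as before.
```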

Contributor:
I don't think an overflow message would be particularly useful, and in theory it would overwrite another message, would it not?

void setMessageAt(int offset, const char* message);

static constexpr size_t MaxMessages = 100;
static constexpr size_t MaxMessageLength = std::numeric_limits<uint8_t>::max();
Contributor:
I'd have this defined as a publicly available constexpr variable inside the logger or something, so that it isn't duplicated in multiple places in the codebase.

Author:
It doesn't really feel duplicated. This constant here limits the size of the LogQueue; the check I added in Connections::sendShortString limits the string that can be sent in that wire format.

It feels weird for Connections to reference logging size, or for LogQueue to reference wire format size.

Author:
E.g. if we wanted to send longer messages, then we'd have to switch to Connections::sendLongString, which has a very different limit. We wouldn't use std::numeric_limits<ulong_t>::max() for our MaxMessageLength.
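To illustrate the separation being described (purely a sketch: Connections::sendShortString is named in this thread, but its signature and the one-byte length prefix assumed here are not verified against the codebase):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <limits>

// The queue's storage limit and the wire format's limit happen to coincide
// (both fit in a uint8_t), but each component owns its own constant.
namespace logging {
	static constexpr size_t MaxMessageLength = std::numeric_limits<uint8_t>::max();
}

namespace network {
	// Hypothetical shape of the wire-format check added in this PR.
	bool sendShortString(const char* str) {
		size_t len = std::strlen(str);
		if (len > std::numeric_limits<uint8_t>::max()) {
			return false;  // a 1-byte length prefix cannot describe this string
		}
		// ... write the uint8_t length, then the bytes, to the UDP packet ...
		return true;
	}
}
```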

else
{
	// Overwrite the last message
	setMessageAt(m_Count - 1, OverflowMessage);
Contributor:
It's probably worth considering whether overwriting the oldest message would make more sense in this context (though hopefully it will never happen).

Author:
Addressed this by silently dropping the earliest message.

static constexpr char OverflowMessage[] = "[OVERFLOW]";

template <typename T, T Modulus>
class Modulo {
Contributor:
Modulo should probably move out of the logging namespace and into something more fitting for it.

Author:
It's a private class at the moment. I didn't want to move it out because it doesn't support all the necessary functions, nor does it take care of general edge cases.

Author (Nov 12, 2024):
I think I'll just get rid of the helper class and move the modulo logic into the LogQueue implementation. I was being too fancy.

Author (Nov 12, 2024):
Addressed this by dropping the Modulo helper class.
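For context, the inlined wrapping can be as simple as the following (a sketch with assumed member names; the removed Modulo<T, Modulus> helper wrapped this arithmetic in a type):

```cpp
// Sketch only: index wrapping written directly in LogQueue.
size_t nextIndex(size_t index) const {
	return (index + 1) % MaxMessages;  // wrap around the circular buffer
}

size_t tailIndex() const {
	return (m_Head + m_Count) % MaxMessages;  // slot for the next message
}
```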

@jabberrock (Author):
I reduced the max messages from 100 to 30 because I was seeing some OOM panics. I haven't seen any more OOMs since then, but I'm curious how the memory budget is allocated.
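For reference, the static footprint is simple arithmetic (a sketch; the per-entry overhead in the actual PR may differ slightly):

```cpp
#include <cstddef>

// 30 slots of 255 B payload plus a terminating NUL is ~7.5 KB, matching the
// "7.4K" figure in the description (30 * 255 = 7650 B). At the original 100
// slots the same layout would need roughly 25 KB of statically allocated RAM.
static constexpr size_t MaxMessages = 30;
static constexpr size_t MaxMessageLength = 255;
static constexpr size_t QueueBytes = MaxMessages * (MaxMessageLength + 1);

static_assert(QueueBytes == 7680, "30 * 256 B = 7680 B (~7.5 KB)");
```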
