This repository has been archived by the owner on Sep 20, 2022. It is now read-only.

Race-condition triggered when reading events takes too long #13

Open
raltnoeder opened this issue Jan 7, 2019 · 2 comments
@raltnoeder (Member)

Problem description
When reading events from drbdsetup takes too long, e.g. when there are many resources and therefore many event lines to parse, the ResourceCollection.prune() method in pkg/update/update.go removes resources that were just added in the same event collection cycle, because their timestamps are already too old by the time prune() runs.
As a result, resources are either never displayed, or, depending on the exact timing, the display may flap between showing and not showing them, or show only some of them.

Steps to reproduce
Run with many resources configured (observed with 2000 resources on our pina/colada test systems). The problem does not seem to affect resources that are not started - those are always listed.

Possible solution
It may make sense to do a single prune() run before starting a new events poll cycle, so that no matter how long each cycle takes, the display would always show at least the resources seen in the last cycle.

raltnoeder added the bug label on Jan 7, 2019
@deleteriousEffect (Contributor)

Well, that's neat. I'd have to look into that code again, but I bet there was a reason I made it time-based. If not, or if that's ultimately not workable, I suspect the solution would be to send some kind of barrier (or similar marker) at the end of each polling cycle and prune events that are x cycles old. I think this might cause an issue if there were multiple independent sources of data sending to the data aggregation layer.

If you can attach a file containing a 2000+ resource output of drbdsetup events2 --timestamps --statistics --now, I could go through and look at the performance data. There might be a bottleneck that could be addressed.

You can also increase the polling interval, e.g. drbdtop --interval=5s; this was developed with use on clusters with lots of resources in mind.

@raltnoeder (Member Author) commented Jan 8, 2019

As far as I understand the code (which may be wrong...), prune() was supposed to remove entries that had not been updated within the last 3 poll cycles, based on the poll interval time.
I opened a pull request with a partial fix. It may still prune entries that were updated more recently than 3 poll cycles ago, but it prevents pruning entries that were updated by the current poll cycle, so that currently active resources are always displayed.

I changed pruning so that it is triggered once after each poll cycle by the collector (a different collector, e.g. an event-based one, may still trigger it once per received event instead), and the cutoff timestamp for pruning is based on the earliest event timestamp of the current poll cycle.
This fixes most of the possible issues, except for a possible drbdtop/drbdsetup timestamp difference in the case where no events are received from drbdsetup (e.g. because all resources have been deactivated).

It also makes parsing noticeably faster, because pruning becomes approximately an O(n) operation per poll cycle instead of the previous O(e * n), where e is the number of events received and n is at most the number of resources (the exact count depends on whether, when, and how many entries are pruned; it's complicated...).

It's not perfect, but it seems that it fixes most issues and does not add any new breakage, so I guess it might just be good enough for now.
