Restore improvement: batch retry #4071
Merged
Conversation
Now, if batch restoration fails on one node, it can still be retried by other nodes. The failed node is no longer used for the restore. Fixes #4065
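A minimal sketch of that retry flow in Go, assuming hypothetical names (`batch`, `batchQueue`, `OnRestoreFailure`) rather than the actual batchDispatcher API:

```go
// Sketch only: on failure the batch goes back to the queue so another node
// can retry it, and the failing host is excluded from further restore work.
package main

import "fmt"

type batch struct {
	Keyspace string
	SSTables []string
}

type batchQueue struct {
	pending []batch
	failed  map[string]bool // hosts excluded from further dispatch
}

// OnRestoreFailure returns the batch to the pool and marks the failing host.
func (q *batchQueue) OnRestoreFailure(host string, b batch) {
	q.pending = append(q.pending, b)
	q.failed[host] = true
}

func main() {
	q := &batchQueue{failed: map[string]bool{}}
	q.OnRestoreFailure("192.168.100.11", batch{Keyspace: "ks", SSTables: []string{"me-1-big-Data.db"}})
	fmt.Println("batches waiting for retry:", len(q.pending), "excluded hosts:", len(q.failed))
}
```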
This commit adds TestRestoreTablesBatchRetryIntegration, which injects errors during the download and LAS steps and validates that the restore finishes successfully despite them (thanks to batch retries).
After giving it some more thought, I decided to flatten the workload structure. Instead of having location/table/dir layers, everything now operates on the dir layer. This makes the implementation easier, especially for the upcoming changes related to node retries.
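For illustration, a flattened, dir-level workload entry could look roughly like the hypothetical struct below (field names are illustrative, not the actual Workload layout):

```go
// Sketch of batching over a flat slice of per-directory entries instead of
// nested location -> table -> dir layers.
package main

import "fmt"

// remoteDirWorkload describes everything needed to restore a single remote
// SSTable directory.
type remoteDirWorkload struct {
	Location  string // backup location the dir lives in
	Keyspace  string
	Table     string
	RemoteDir string
	Size      int64
	SSTables  []string
}

func main() {
	workload := []remoteDirWorkload{
		{Location: "s3:backup-bucket", Keyspace: "ks", Table: "t", RemoteDir: "sst/ks/t/dir1", Size: 1 << 20},
	}
	fmt.Println("dirs to restore:", len(workload))
}
```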
Now, even if a host failed to restore a given batch, it can still try to restore batches originating from different DCs. This improves retries in general and should also help with #3871.
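A rough sketch of tracking failures per host and DC, with made-up names, just to illustrate why a host that failed on one DC's batches can still pick up batches from another DC:

```go
// Sketch only: a host is excluded per DC, not globally.
package main

import "fmt"

// hostFailedDCs records which DCs a host already failed on.
type hostFailedDCs map[string]map[string]bool

func (h hostFailedDCs) markFailed(host, dc string) {
	if h[host] == nil {
		h[host] = map[string]bool{}
	}
	h[host][dc] = true
}

func (h hostFailedDCs) canRestore(host, dc string) bool {
	return !h[host][dc]
}

func main() {
	f := hostFailedDCs{}
	f.markFailed("host1", "dc1")
	fmt.Println(f.canRestore("host1", "dc1")) // false: no more dc1 batches for host1
	fmt.Println(f.canRestore("host1", "dc2")) // true: dc2 batches can still be retried
}
```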
This commit extends TestBatchDispatcher to include failures in its scenario and to validate host retry in a different datacenter.
@karol-kokoszka This PR is ready for review!
Previously, the Workload structure was created during indexing and was updated during batching in order to keep track of its progress. This was confusing, because it wasn't obvious whether the size and SSTable fields described the initial Workload state or the updated one. This commit makes it so the Workload structure is not changed during batching. Instead, workloadProgress was added in order to store batching progress. Moreover, this commit also adds a lot of documentation about batchDispatcher's internal behavior.
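A hypothetical sketch of that split between the read-only workload computed during indexing and the mutable progress tracked during batching (names are illustrative):

```go
// Sketch only: the initial workload is never mutated; only the progress is.
package main

import "fmt"

// workload describes the initial state computed during indexing.
type workload struct {
	TotalSize int64
	SSTables  []string
}

// workloadProgress is the only structure updated while batches are dispatched.
type workloadProgress struct {
	RestoredSize int64
	Remaining    []string
}

func main() {
	w := workload{TotalSize: 100, SSTables: []string{"a", "b"}}
	p := workloadProgress{Remaining: append([]string(nil), w.SSTables...)}
	// Restoring "a" updates the progress, not the workload itself.
	p.RestoredSize, p.Remaining = 50, p.Remaining[1:]
	fmt.Printf("initial=%d restored=%d remaining=%v\n", w.TotalSize, p.RestoredSize, p.Remaining)
}
```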
After re-reading the code, I discovered 3 areas which need to be improved:
I will include those fixes in this PR today.
Consider a scenario with parallel=1 and multi-DC, multi-location backups. Note that SM uses 'parallel.Run' for restoring in parallel. Previous batching changes made a host hang in 'batchDispatcher.DispatchBatch' when there were no more SSTables to restore, because another node could still fail to restore some SSTables, in which case the hanging host would be awakened and the failed SSTables returned to the batchDispatcher. All of this meant that the batching process could hang: 'parallel.Run' would allow only a single host to restore SSTables at a time, but the batching mechanism wouldn't free it until all SSTables were restored. Another scenario in which the batching mechanism could fail is when all hosts failed (even with retries) to restore all SSTables.

Because of that, I changed the batching mechanism to be more DC oriented. Now, 'workloadProgress' keeps track of the remaining bytes to be restored per DC, and it also keeps host-to-DC access instead of host-to-location access (the assumption being that a single DC can be backed up to only a single location). This information allows freeing hosts that can't restore any SSTables, because they either already failed to restore some SSTables from the given DCs, or all SSTables from the given DCs were already restored.
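A simplified sketch of the "can this host still do any work?" check described above, with hypothetical field names (the real workloadProgress is more involved):

```go
// Sketch only: a host is released from the batching loop once every DC it can
// access is either fully restored or has already failed on this host.
package main

import "fmt"

type workloadProgress struct {
	bytesToRestore map[string]int64    // remaining bytes per DC
	hostDCAccess   map[string][]string // DCs whose backup location a host can access
	hostFailedDC   map[string]map[string]bool
}

// hostDone reports whether the host has no possible work left.
func (p workloadProgress) hostDone(host string) bool {
	for _, dc := range p.hostDCAccess[host] {
		if p.bytesToRestore[dc] > 0 && !p.hostFailedDC[host][dc] {
			return false
		}
	}
	return true
}

func main() {
	p := workloadProgress{
		bytesToRestore: map[string]int64{"dc1": 0, "dc2": 512},
		hostDCAccess:   map[string][]string{"host1": {"dc1"}, "host2": {"dc1", "dc2"}},
		hostFailedDC:   map[string]map[string]bool{},
	}
	fmt.Println(p.hostDone("host1")) // true: dc1 fully restored, host1 can be released
	fmt.Println(p.hostDone("host2")) // false: dc2 still has bytes to restore
}
```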
I'm not sure if the previous behavior was bugged, but the changes introduced in this commit should make it clearer that the batching mechanism respects context cancellation. This commit also adds a simple test validating that pausing a restore during batching ends quickly.
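A minimal sketch of making a blocking dispatch respect context cancellation; `dispatchBatch` and `wakeCh` are stand-ins, not the actual implementation:

```go
// Sketch only: a waiting host returns promptly when the restore is paused.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// dispatchBatch blocks until more work shows up on wakeCh or the context is
// cancelled (e.g. the restore task is paused).
func dispatchBatch(ctx context.Context, wakeCh <-chan struct{}) error {
	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-wakeCh:
		return nil
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Millisecond)
	defer cancel()
	err := dispatchBatch(ctx, make(chan struct{})) // nothing will ever wake us up
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true: returns quickly on pause/cancel
}
```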
Michal-Leszczynski force-pushed the ml/restore-retry branch from 1d3db02 to 2eb9e4a on October 21, 2024 19:00
@karol-kokoszka PR is ready for re-review!
karol-kokoszka approved these changes on Oct 22, 2024
This PR adds 2 changes related to work orchestration during the download and LAS stages.
After a batch fails to be restored, SM cleans the upload dir so that there are no leftovers there.
These changes aim to make restore more robust and should help with #3871.
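A hypothetical control-flow sketch of the cleanup-before-retry behavior mentioned above; `restoreBatch` and `cleanUploadDir` stand in for the real agent calls:

```go
// Sketch only: a failed batch never leaves leftovers in the upload dir, so
// another node can safely retry the same SSTables later.
package main

import (
	"errors"
	"fmt"
)

func restoreBatch(host string) error { return errors.New("LAS failed") } // simulated failure

func cleanUploadDir(host string) error {
	fmt.Println("cleaning upload dir on", host)
	return nil
}

func tryRestore(host string) error {
	if err := restoreBatch(host); err != nil {
		if cerr := cleanUploadDir(host); cerr != nil {
			return errors.Join(err, cerr)
		}
		return err // batch goes back to the dispatcher for retry elsewhere
	}
	return nil
}

func main() {
	fmt.Println(tryRestore("192.168.100.12"))
}
```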
Moreover, this PR also refactors general areas connected to batching:
Ref #3871
Fixes #4065