Upload: Auto adjust maxChunkSize until upload succeeds after HTTP-413 #4826
Conversation
force-pushed from 67be42c to 9ef12fb
Codecov Report

```
@@            Coverage Diff             @@
##           master    #4826      +/-   ##
==========================================
- Coverage   60.79%   57.18%   -3.61%
==========================================
  Files         145      138       -7
  Lines       18836    17187    -1649
==========================================
- Hits        11451     9829    -1622
+ Misses       7385     7358      -27
```
force-pushed from 36f3b56 to 5ee415f
Hi @allexzander, are you OK with the updates to your review, or do you expect additional changes?
Thanks for your pull request 👍 It's a good improvement to handle a 413 response properly. I wonder if it's possible to make this feature work without a new configuration option. If I read the existing code correctly, we have some calculation to increase the chunk size step by step. Could we store the last working chunk size and fall back to it on a 413 response? I guess there is nothing wrong with an additional configuration option, but we already have 4 for chunked uploads 🤔
Hi @kesselb, I see your point. The reason for the config value is primarily to have a starting point for the auto-sensing that doesn't hurt performance too much (since both cases, connection close and HTTP-413, are handled the same for uploads larger than failsafeMaxChunkSize). I guess what can be done is to have a built-in range for maxChunkSize. This ensures that performance cannot get too bad or stay bad when limits are changed again. The only question is whether this should be reset at some point or stay in the DB forever?
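A minimal sketch of the idea being discussed here, purely as an illustration; all identifiers are hypothetical and do not mirror the actual client code:

```cpp
// Hypothetical sketch of "remember the last working chunk size": persist the
// last size the server accepted and clamp future starting sizes to it.
#include <algorithm>
#include <cstdint>

struct ChunkSizeMemory {
    int64_t lastWorkingChunkSize = 0; // 0 = nothing persisted yet

    // Before an upload: never start above the persisted known-good value.
    int64_t startingChunkSize(int64_t configuredMax) const {
        return lastWorkingChunkSize > 0
            ? std::min(configuredMax, lastWorkingChunkSize)
            : configuredMax;
    }

    // After the server accepted a chunk: allow growing again step by step.
    void recordSuccess(int64_t chunkSize) {
        lastWorkingChunkSize = std::max(lastWorkingChunkSize, chunkSize);
    }

    // On HTTP 413: fall back to half the rejected size, bounded from below.
    void recordRejection(int64_t rejectedSize, int64_t minChunkSize) {
        lastWorkingChunkSize = std::max(minChunkSize, rejectedSize / 2);
    }
};
```

Persisting such a value in the sync journal DB would match the open question above about whether it should ever be reset.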
force-pushed from 5ee415f to d93c768
Hi @kesselb, @allexzander, I have updated the implementation as follows:
The latest commit addresses this.
@mgallien WDYT?
@mgallien As this is also causing clients to fail to sync, perhaps it'd be good to make it part of the fixes for reliable syncing?
If it takes more time to finish this pull request, we could also revert #3600; maxChunkSize = 1000 is only possible when the server accepts such a big file.
Reverting it while a better fix is worked on would be appreciated.
By the way, I'm happy to update the PR if anyone can find the time to review it. Nevertheless, reducing the default maxChunkSize would help in the meantime.
Hello, we are really sorry this slipped through the cracks for so long; this is a fantastic contribution, thank you. Would you be willing to fix the merge conflicts and make this mergeable again? If not, we are happy to cherry-pick your commits and fix it ourselves.
@claucambra, will look into it this week
AppImage file: nextcloud-PR-4826-09be2f7e8139a39a115396d77a8cf666b548bcb4-x86_64.AppImage
Hi team, what is needed to get this over the finish line and merged?
@jospoortvliet, what's missing are fixes for the unit tests. On the test systems I had access to, Windows Server 2016 (x64) and macOS (x64), I had no test failures (for the parts that are supported; Win 2016 seems to be missing some parts, but the test that failed here was working fine). Another Win 11 instance produced some other issues, but that ran on ARM64, so it's not the most reliable result. To be honest, I wasn't sure that the changes made here affect the test failures, as the results vary quite a bit depending on the OS used.
Is it possible to revive this PR? I had to fiddle around with the chunk size config on my fresh new system to get it to work. It seems like this PR would adjust the parameters dynamically on the fly, which could solve most such problems where the reverse proxy (or whatever sits in front) doesn't like the out-of-the-box chunk size. Adding the following configs helped me in my current case on macOS: chunkSize=10000000
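For reference, such a workaround might look roughly like this in nextcloud.cfg; only chunkSize appears in the comment above, the other keys are the related options named elsewhere in this PR, and all values are illustrative rather than recommendations:

```ini
[General]
; initial chunk size in bytes (10 MB, as mentioned above)
chunkSize=10000000
; lower and upper bounds for the dynamic chunk sizing (illustrative values)
minChunkSize=1000000
maxChunkSize=50000000
```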
Up! Please take a look at this, team!
@mgallien @allexzander Could you look at this PR? This fix has been waiting too long.
Hello, thank you for your contribution to the Desktop Client with this pull request. We truly value your support and hope to see more contributions from you in the future! Best regards,
Wait, this is for an active issue that had comments on it a few weeks ago which were ignored. The reason it was not accepted is that your TESTS ARE BROKEN, from what I can see. You're closing it because you can't fix it, or what? Issue #4278 == won't fix? Is this a dumb joke?
That's why I migrated all clients to ownCloud.
I wouldn't count "basic file sync functionality" as a "small thing", personally.
Hello, please let me explain the reason for this particular closing, which was not documented properly in the closing comment. The feature is not isolated to one desktop client; it affects all clients: Web, Linux, Windows, macOS, macOS VFS, Android, iOS. The server implementation was done in nextcloud/server#48758. The corresponding desktop implementation will be tracked in #7347. I hope this clears up the confusion, and I apologize for not putting this in the original closing post.
#7347 tracks the use of this feature that was added to the server: nextcloud/server#48758 Instead of this PR's clever logic to guess the max chunk size, clients can now ask the server how big the max chunk size is. Nice! |
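For context, a rough sketch of how that might surface in the OCS capabilities response; the exact key names are an assumption here and should be checked against nextcloud/server#48758:

```json
{
  "ocs": {
    "data": {
      "capabilities": {
        "files": {
          "chunked_upload": {
            "max_size": 104857600,
            "max_parallel_count": 5
          }
        }
      }
    }
  }
}
```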
Currently, when Nextcloud runs behind a server with an upload limit, uploading a large file (e.g. several 100M) will likely fail when `nextcloud.cfg` is not manually adjusted on every endpoint to limit `maxChunkSize` (assuming an upload limit is usually not set to 1G but more likely to 100M or lower).

This PR implements auto-adjustment of the max upload size when receiving `HTTP-413` (request entity too large) or a connection-close error after large uploads.

This is achieved with the following strategy:
- A new configuration value `failsafeMaxChunkSize` is introduced (default `100M`).
- When `HTTP-413` is received:
  - If `requestSize > failsafeMaxChunkSize`, the failsafe max size applies and the sync is retried immediately.
  - Otherwise, `maxChunkSize = requestSize / 2` and the sync is retried until it succeeds or `minChunkSize` is reached (default `1M`).
- A connection-close error is handled like `HTTP-413` if `requestSize > failsafeMaxChunkSize`. HTTP usually requires that requests are fully consumed; servers may decide to close the HTTP connection when too much data is sent to prevent this. As connection close is not a specific error, only large uploads are handled by this change.
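A condensed sketch of the retry rule above; identifiers and the exact control flow are illustrative, not the PR's actual code, while the defaults are the ones stated in this description:

```cpp
// Illustrative sketch of the chunk-size fallback after HTTP 413.
#include <algorithm>
#include <cstdint>

constexpr int64_t kMinChunkSize = 1LL * 1000 * 1000;           // default 1M
constexpr int64_t kFailsafeMaxChunkSize = 100LL * 1000 * 1000; // default 100M

// Returns the next chunk size to retry with, or -1 to give up.
int64_t nextChunkSizeAfter413(int64_t requestSize)
{
    if (requestSize > kFailsafeMaxChunkSize) {
        // Larger than the failsafe: retry immediately with the failsafe cap.
        return kFailsafeMaxChunkSize;
    }
    if (requestSize <= kMinChunkSize) {
        return -1; // lower bound already reached: stop retrying
    }
    // Halve and retry until the upload succeeds or the minimum is reached.
    return std::max(kMinChunkSize, requestSize / 2);
}
```

With these bounds, even repeated halving from the failsafe cap needs only a handful of retries before either succeeding or giving up.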
While `failsafeMaxChunkSize` can be adjusted in `nextcloud.cfg` or via the env var `OWNCLOUD_FAILSAFE_MAX_CHUNK_SIZE`, it should have a reasonable default so that adjusting it isn't required. When set properly, it limits the number of retries needed and enables uploads for the cases where the HTTP connection is closed in response to sending too much data.

For cases where the upload limit in the web server is greater than `1M` and a connection-close error is only received for uploads `>100M`, the default values allow all uploads to succeed without any additional configuration.

This should fix most cases reporting errors in this area, like #4278 and #925.
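For illustration, an override could look like this in nextcloud.cfg; the key and env var names come from this PR, while the `[General]` section and the value are assumptions:

```ini
[General]
; cap for the first retry after HTTP-413 / connection close, in bytes (50 MB here)
failsafeMaxChunkSize=52428800
```

Setting `OWNCLOUD_FAILSAFE_MAX_CHUNK_SIZE` in the environment would have the same effect.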
For 2 concurrent uploads (~0.5 GB each) and the nginx frontend limited to 100M, this looks like this (in an earlier implementation without `failsafeMaxChunkSize`):

After 3 attempts, the uploads succeed.
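For reference, a front-end limit like the one mentioned here is typically what triggers the HTTP-413 in the first place; in nginx it is set with the standard `client_max_body_size` directive (illustrative snippet):

```nginx
# Inside the http/server/location context of the nginx config in front of
# Nextcloud; request bodies larger than this are rejected with HTTP 413.
client_max_body_size 100m;
```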
Hope this is accepted. Feedback is welcome.