fix(quota): protect local files from deletion when quota blocked upload. #10001
camilasan wants to merge 8 commits into
Conversation
/backport to stable-33.0
… upload.

If a file failed to upload because the server quota was exceeded (HTTP 507), it was never actually stored on the server, but the sync client could still delete the local copy when the remote parent folder was later moved or deleted via the web interface.

- checkNewDeleteConflict() now checks the error blacklist before issuing a local REMOVE. If the file carries an InsufficientRemoteStorage entry, it is kept locally and reported as a soft error instead.
- The same _httpErrorCode = 507 is now also set in the discovery-phase quota check.
- renameErrorBlacklistPaths() updates all blacklist entries under the renamed path at move propagation time, keeping the InsufficientRemoteStorage entry aligned with the file's actual location across rename cycles.
- Add a renameErrorBlacklistPaths call to PropagateLocalRename. Without it, quota-blocked files in a server-renamed folder still lost their InsufficientRemoteStorage entry, leaving them unprotected when the renamed folder was subsequently deleted.
- Add tests.

Signed-off-by: Camila Ayres <hello@camilasan.com>
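The protection rule the commit describes can be sketched as a simplified, self-contained model (this is illustrative pseudocode of the idea, not the actual client code; the real checkNewDeleteConflict operates on sync items and the journal DB):

```cpp
#include <cassert>
#include <map>
#include <string>

// Simplified model of the protection rule: before propagating a remote delete
// to a local file, consult the error blacklist. A file with an
// InsufficientRemoteStorage entry was never stored on the server (its upload
// was blocked by HTTP 507), so removing the local copy would lose data.
enum class Instruction { Remove, KeepWithSoftError };
enum class ErrorCategory { None, InsufficientRemoteStorage };

using Blacklist = std::map<std::string, ErrorCategory>;

Instruction checkNewDeleteConflict(const Blacklist &blacklist, const std::string &path)
{
    auto it = blacklist.find(path);
    if (it != blacklist.end() && it->second == ErrorCategory::InsufficientRemoteStorage) {
        // Quota-blocked upload: keep the file locally, report a soft error.
        return Instruction::KeepWithSoftError;
    }
    // Ordinary case: trust the server-side deletion.
    return Instruction::Remove;
}
```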
When a local file is blocked from uploading due to remote quota (HTTP 507) and the parent folder is subsequently deleted on the server, checkErrorBlacklisting() would overwrite the deliberate CSYNC_INSTRUCTION_ERROR set by checkNewDeleteConflict() with a generic CSYNC_INSTRUCTION_IGNORE, so the user would lose the error message explaining why some files were kept. Also update tests.

Signed-off-by: Camila Ayres <hello@camilasan.com>
…tead of stopping.

When an upload fails with HTTP 507 (Insufficient Storage), the blacklist entry was given a growing _ignoreDuration, causing the file to be silently ignored on subsequent syncs even after the user freed storage quota.

Set _ignoreDuration = 0 for InsufficientRemoteStorage entries so that the file retries on every sync cycle. The blacklist entry is still written (checkNewDeleteConflict can still see it to protect the file from accidental local deletion), but checkErrorBlacklisting treats duration-0 entries as immediately expired and lets the upload proceed.

Update testQuotaBlockedFileProtectedAfterParentFolderMoveThenDelete to keep the 507 error active during the delete-parent sync so the test verifies the correct new behavior: the file is reported as a failed new upload (DetailError) rather than a silently skipped BlacklistedError.

Signed-off-by: Camila Ayres <hello@camilasan.com>
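The retry rule described above reduces to a small predicate; the following is a simplified sketch (field names mirror the commit message, but the real entries live in the sync journal and carry more state):

```cpp
#include <cassert>
#include <cstdint>

// Simplified model of the retry decision: an entry with ignoreDuration == 0
// is treated as immediately expired, so the upload retries on every sync;
// a non-zero duration suppresses retries until lastTryTime + ignoreDuration.
struct BlacklistEntry {
    int64_t lastTryTime = 0;    // seconds since epoch of the last attempt
    int64_t ignoreDuration = 0; // 0 means "retry every sync" (quota errors)
};

bool shouldRetryNow(const BlacklistEntry &entry, int64_t now)
{
    if (entry.ignoreDuration == 0)
        return true; // InsufficientRemoteStorage: never silently skipped
    return now >= entry.lastTryTime + entry.ignoreDuration;
}
```

The entry itself is still written, so the deletion protection keeps working; only the backoff is disabled for quota errors.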
Verifies that setting _ignoreDuration = 0 for HTTP 507 errors causes the file to retry and upload successfully on the very next sync once quota is freed, rather than waiting out an exponential backoff. Signed-off-by: Camila Ayres <hello@camilasan.com>
When a folder containing files that failed to upload because of quota errors was renamed on the server, the sync client correctly followed the rename but emitted no storage notification for that sync cycle. During discovery the files were not rediscovered as local items because the local folder still had the old name; the notification only reappeared on the following sync, when the upload was reattempted and the server returned 507 again.

Make renameErrorBlacklistPaths return true when it renames one or more InsufficientRemoteStorage entries. Callers in PropagateLocalRename and PropagateRemoteMove emit insufficientRemoteStorage immediately, so the storage-full message is shown in the same sync cycle as the rename.

Add testStorageNotificationEmittedOnParentFolderRename to verify that the syncError signal with InsufficientRemoteStorage is emitted during the rename sync, not only on a subsequent retry.

Signed-off-by: Camila Ayres <hello@camilasan.com>
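The rename-and-report behaviour can be modelled with a plain in-memory map (a hedged sketch, not the real DB-backed code, which does this with a single SQL UPDATE): rewrite every blacklisted path at or under the old location and report whether anything was touched, so the caller knows to re-emit the storage-full notification.

```cpp
#include <cassert>
#include <map>
#include <string>

// Simplified in-memory model of renameErrorBlacklistPaths: move every entry
// whose path equals `from` or starts with `from + "/"` to the new location.
// Returns true if at least one entry was renamed, so the caller can emit the
// storage-full notification in the same sync cycle as the rename.
bool renameBlacklistPaths(std::map<std::string, int> &blacklist,
                          const std::string &from, const std::string &to)
{
    std::map<std::string, int> renamed;
    bool touched = false;
    for (const auto &[path, httpCode] : blacklist) {
        if (path == from || path.rfind(from + "/", 0) == 0) {
            // Keep the entry, but under the folder's new name.
            renamed[to + path.substr(from.size())] = httpCode;
            touched = true;
        } else {
            renamed[path] = httpCode;
        }
    }
    blacklist = std::move(renamed);
    return touched;
}
```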
…d errors.

When a server-side folder rename is applied locally, PropagateDirectory only wrote the renamed directory's record in the sync journal on a full Success result. If a child upload failed, the directory record was never written, so the next sync could not match the directory by fileId and treated a follow-up server-side rename as DELETE + NEW instead of following the move.

Write the directory metadata record for DOWN renames even when child jobs have errors, as long as the directory rename job itself succeeded.

Signed-off-by: Camila Ayres <hello@camilasan.com>
…arents.

When discovery issues an async 404 to verify whether a path was deleted on the server (a step in rename detection), children blocked from upload because of quota errors inside a queued delete-directory job could fire recursiveCheckForDeletedParents and cancel the parent's REMOVE instruction before the 404 response arrived and rename detection could claim the path.

Track paths that have an async 404 in flight in _pendingRenameSourcePaths and skip them in recursiveCheckForDeletedParents so rename detection is not interrupted by quota-protected children.

Signed-off-by: Camila Ayres <hello@camilasan.com>
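The guard reduces to a membership check; here is a minimal sketch of the idea (illustrative names; the real code integrates this into the discovery phase):

```cpp
#include <cassert>
#include <set>
#include <string>

// Simplified model of the guard: while an async 404 probe for a path is in
// flight (rename detection), recursiveCheckForDeletedParents must not touch
// that path, or it could cancel the parent's REMOVE instruction before
// rename detection can claim it.
struct RenameGuard {
    std::set<std::string> pendingRenameSourcePaths;

    bool mayCheckDeletedParent(const std::string &path) const
    {
        // Skip paths whose server-side deletion status is still unverified.
        return pendingRenameSourcePaths.count(path) == 0;
    }
};
```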
…rror occur in same sync. When a file blocked from upload due to a quota error has no prior blacklist entry (upload was never attempted), fall back to the parent folder's last known quota from the DB to decide whether to protect the file from local deletion when its remote parent is deleted. Signed-off-by: Camila Ayres <hello@camilasan.com>
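The fallback decision described in this commit can be sketched as follows (a hypothetical simplification; the function name and parameters are illustrative, not the real API):

```cpp
#include <cassert>

// Simplified model of the same-sync fallback: when the quota error and the
// remote delete happen in the same sync, the file has no blacklist entry yet
// (the upload was never attempted), so the decision to protect it falls back
// to the parent folder's last known quota stored in the DB.
bool protectFromLocalDeletion(bool hasQuotaBlacklistEntry,
                              long long fileSize, long long parentFreeQuota)
{
    if (hasQuotaBlacklistEntry)
        return true; // normal path: the 507 was recorded on an earlier sync
    // Same-sync case: infer the quota block from the stored parent quota.
    return fileSize > parentFreeQuota;
}
```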
Artifact containing the AppImage: nextcloud-appimage-pr-10001.zip
Digest:
To test this change/fix you can download the above artifact file, unzip it, and run it. Please make sure to quit your existing Nextcloud app and back up your data.
|
| countQuery.prepare("SELECT COUNT(*) FROM blacklist " | ||
| "WHERE errorCategory = ?1 " | ||
| "AND (path = ?2 OR (path > (?2 || '/') AND path < (?2 || '0')))"); |
please use prepared SQL statements in this method (similar to how it's done in e.g. getFileRecordsByFileId)
    SyncJournalErrorBlacklistRecord errorBlacklistEntry(const QString &);
    [[nodiscard]] bool deleteStaleErrorBlacklistEntries(const QSet<QString> &keep);
    bool renameErrorBlacklistPaths(const QString &from, const QString &to);
Suggested change:
-    bool renameErrorBlacklistPaths(const QString &from, const QString &to);
+    [[nodiscard]] bool renameErrorBlacklistPaths(const QString &from, const QString &to);
    // Update the exact folder entry and all entries whose path starts with "from/".
    // Uses the same range trick as IS_PREFIX_PATH_OR_EQUAL: '/' + 1 == '0'.
    SqlQuery query(_db);
    query.prepare("UPDATE blacklist "
                  "SET path = ?2 || substr(path, length(?1) + 1) "
                  "WHERE path == ?1 OR (path > (?1 || '/') AND path < (?1 || '0'))");
    query.bindValue(1, from);
    query.bindValue(2, to);
    if (!query.exec()) {
        sqlFail(QStringLiteral("renameErrorBlacklistPaths"), query);
    }
does this need to be run if there aren't any quota entries?
    // Mark as a quota error so blacklistUpdate writes an InsufficientRemoteStorage entry.
    // Without this, _httpErrorCode would be 0 and blacklistUpdate would not create an
    // entry, leaving the file unprotected by checkNewDeleteConflict if the remote parent
    // folder is deleted before the quota situation is resolved.
    item->_httpErrorCode = 507;
this seems like a hack to me, we probably shouldn't fake a returned HTTP error code
looking at blacklistUpdate, there is an unused bool in SyncFileItem, _errorMayBeBlacklisted, which could be renamed and reused to indicate that this item is excluded for quota reasons; then update SyncJournalErrorBlacklistRecord::createBlacklistEntry to also consider that flag for the blacklist entry
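The reviewer's suggestion could look roughly like this hedged sketch (field and function names are hypothetical stand-ins, not the real SyncFileItem/createBlacklistEntry signatures; the final naming is up to the PR):

```cpp
#include <cassert>

// Hypothetical sketch of the flag-based approach: instead of faking
// _httpErrorCode = 507 during discovery, set an explicit bool on the item
// (here blockedDueToQuota, imagined as a renamed _errorMayBeBlacklisted)
// and let the blacklist-entry factory consider either signal.
struct SyncFileItem {
    int httpErrorCode = 0;
    bool blockedDueToQuota = false; // set by the discovery-phase quota check
};

enum class Category { Normal, InsufficientRemoteStorage };

Category categoryFor(const SyncFileItem &item)
{
    // A real 507 from propagation and the discovery-phase flag both map to
    // the quota category, without inventing an HTTP status that never occurred.
    if (item.httpErrorCode == 507 || item.blockedDueToQuota)
        return Category::InsufficientRemoteStorage;
    return Category::Normal;
}
```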
    // Add a new local file that will fail to upload due to remote quota (HTTP 507).
    fakeFolder.localModifier().insert(QStringLiteral("A/quota_blocked.txt"), 100);
    fakeFolder.serverErrorPaths().append(QStringLiteral("A/quota_blocked.txt"), 507);
shouldn't the tests call fakeFolder.remoteModifier().setFolderQuota("A", ...) as well to ensure the quota is caught before any upload attempt?
right now the sync in these tests fails because it receives a 507 during propagation, not because the client determined during discovery that the to-be-uploaded file is too big for the quota
    // File must remain absent on the server.
    QVERIFY(!fakeFolder.currentRemoteState().find(QStringLiteral("A/quota_blocked.txt")));
what happens after another sync once the synced items are deleted locally?
will the file still be present locally? is it going to get uploaded? if it's still blacklisted, how will the user be able to sync those non-uploaded files again?
    // Regression: new files in a server deleted folder must still be deleted locally
    void testDeleteDirectoryWithNewFileNoQuotaError()
    {
        FakeFolder fakeFolder{FileInfo::A12_B12_C12_S12()};
        fakeFolder.remoteModifier().remove(QStringLiteral("A"));
        fakeFolder.localModifier().insert(QStringLiteral("A/newfile.txt"), 100);

        QVERIFY(fakeFolder.syncOnce());

        // New local file with no quota blacklist entry must be removed when parent is deleted on server.
        QVERIFY(!fakeFolder.currentLocalState().find(QStringLiteral("A/newfile.txt")));
        QCOMPARE(fakeFolder.currentLocalState(), fakeFolder.currentRemoteState());
    }
just a note: this behaviour (which AFAIK had been that way since forever) appears to be inconsistent with how the non-uploaded files affected by the quota are kept now
as a user I would be surprised if only some items are kept after having them removed from the server. perhaps something to discuss later?
    // Previously synced sibling must be removed (trust the server deletion).
    QVERIFY(!fakeFolder.currentLocalState().find(QStringLiteral("A/small.txt")));
again, please verify the behaviour after another sync
Resolves
When a file fails to upload because the server storage quota is exceeded, the client now protects it from being accidentally deleted if the parent folder is later renamed or removed on the server. The blacklist entry is updated when the folder moves so the file stays protected across renames, and the storage full notification is re-emitted on every sync so the user always knows there are files waiting to be uploaded once quota is freed.
Two edge cases in rename detection were also fixed to ensure the client correctly follows folders through multiple successive server side renames even when they contain files that were never uploaded.
Users are notified of the error with a message explaining why the client is not deleting the whole content of the folder.
For example:
The "Big 2" folder was moved and then deleted on the server, but locally it is preserved and the client displays the error above.

Uploads blocked because of quota errors also retry on every sync instead of waiting up to 24 hours.
Files that were previously synced successfully are unaffected.
Summary
How to test it
Prerequisites: build from this branch, have a Nextcloud test account where you can control the quota from the admin panel.
Test 1: file survives parent folder deletion
Test 2: file follows parent folder rename
Test 3: file retries once quota is freed
Regression check
4. Delete a folder on the server that contains a new local file with no quota error
5. Sync — that file should be deleted locally as before (quota protection must not affect ordinary files)
Checklist
AI (if applicable)