borg 2.0.0b14 seems to be very slow during the initial chunk index processing:
2025-02-02 15:08:18.631320500 borgstore.backends.errors.ObjectNotFound: cache/chunks
2025-02-02 15:08:18.631320500
2025-02-02 15:08:18.648099500 cached chunk indexes: ['39267a11a8ea6a4c', 'da243a8927c10035']
2025-02-02 15:08:18.648261500 trying to load cache/chunks.39267a11a8ea6a4c from the repo...
2025-02-02 15:08:18.668600500 cache/chunks.39267a11a8ea6a4c is valid.
2025-02-02 15:08:18.669315500 cached chunk index 39267a11a8ea6a4c gets merged...
2025-02-02 15:08:18.671682500 trying to load cache/chunks.da243a8927c10035 from the repo...
2025-02-02 15:08:21.239523500 cache/chunks.da243a8927c10035 is valid.
2025-02-02 15:08:21.954073500 cached chunk index da243a8927c10035 gets merged...
2025-02-02 15:08:51.544026500 caching 1476533 new chunks.
2025-02-02 15:08:51.610601500 cached chunk indexes: ['39267a11a8ea6a4c', 'da243a8927c10035']
2025-02-02 15:08:51.610614500 caching chunks index as cache/chunks.7e288068a1b796fa in repository...
2025-02-02 15:09:01.386515500 cached chunk indexes deleted: {'39267a11a8ea6a4c', 'da243a8927c10035'}
the repo used here has 120 archives with ~1.4M objects, and the initial chunk index merge takes 30 seconds. on a large repo with 430 archives and over 2M objects it takes over a minute. during this time the progress bar of a non-debug run is stuck at zero with no indication of what is happening, but one core is pegged at 100%. total backup time of an unchanged source more than doubled compared to borg1 (but the .cache space it needs is two orders of magnitude smaller, which is greatly appreciated!)
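To illustrate why this phase pegs a single core, here is a minimal sketch (not borg's actual code; `merge_chunk_indexes` and the dict-based index layout are assumptions for illustration) of merging cached chunk indexes into one master index. A single-threaded pass over millions of entries like this is pure CPU work, consistent with the observed one core at 100% while the progress bar sits at zero:

```python
# Hypothetical sketch: each cached chunk index is modeled as a dict
# mapping chunk id -> (refcount, size). Merging sums the refcounts of
# chunks that appear in more than one index. With ~1.4M entries this
# loop runs single-threaded, which matches "one core pegged at 100%".

def merge_chunk_indexes(indexes):
    """Merge per-cache chunk indexes into one master index."""
    master = {}
    for index in indexes:
        for chunk_id, (refcount, size) in index.items():
            if chunk_id in master:
                old_refcount, _ = master[chunk_id]
                master[chunk_id] = (old_refcount + refcount, size)
            else:
                master[chunk_id] = (refcount, size)
    return master

# Tiny example: two indexes sharing one chunk id.
idx_a = {b"c1": (1, 4096), b"c2": (2, 1024)}
idx_b = {b"c2": (1, 1024), b"c3": (1, 512)}
merged = merge_chunk_indexes([idx_a, idx_b])
# merged[b"c2"] == (3, 1024); three distinct chunks in total
```

If the real merge works anything like this, the cost is linear in the total number of index entries, so the 30 s for ~1.4M objects vs. over a minute for 2M+ objects scaling in the report would be expected behavior rather than a tuning issue.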
is this a tuning problem that we haven't found the documentation for, or is the borg2 overhead just legitimately that high?