feat(backup): incremental NAS backup support for KVM #13074
Open
jmsperu wants to merge 8 commits into apache:main from
Conversation
Adds the design document for incremental NAS backups using QEMU dirty bitmaps and libvirt's backup-begin API. Reduces daily backup storage by 80-95% for large VMs. Refs: apache#12899
NASBackupChainKeys defines the keys this provider stores under the existing backup_details kv table (parent_backup_id, bitmap_name, chain_id, chain_position, type). This keeps the backups table provider-agnostic per the RFC review.

nas.backup.full.every is a zone-scoped ConfigKey that controls how often a full backup is taken; the remaining backups in the cycle are incremental. It counts backups (not days), so it works for hourly, daily, and ad-hoc schedules. Default is 10. Set it to 1 to disable incrementals (every backup is full). Refs: apache#12899
Adds three new optional CLI flags to nasbackup.sh:
-M|--mode <full|incremental>
--bitmap-new <name> (checkpoint to create with this backup)
--bitmap-parent <name> (incremental: parent bitmap to read changes since)
--parent-path <path> (incremental: parent backup file for rebase)
Behavior:
- When -M is omitted, behavior is unchanged (legacy full-only, no checkpoint
created), so existing callers are not affected.
- With -M full + --bitmap-new, a full backup is taken AND a libvirt
checkpoint of that name is registered atomically (via backup-begin's
--checkpointxml), giving the next incremental its starting bitmap.
- With -M incremental, libvirt's <incremental> element references the
parent bitmap; only changed blocks are written. After completion,
qemu-img rebase wires the new file to its parent so the chain on the
NAS is self-describing for restore.
- Stopped VMs cannot use backup-begin; if -M incremental is requested
while VM is stopped, the script falls back to a full and emits
INCREMENTAL_FALLBACK= on stderr so the orchestrator can record it
correctly in the chain.
- The script echoes BITMAP_CREATED=<name> on success so the Java caller
can store it under backup_details (NASBackupChainKeys.BITMAP_NAME).
Works across local file, NFS-file, and LINSTOR primary storage. Ceph RBD
running-VM support is a pre-existing limitation of this script, not
affected by this change.
Refs: apache#12899
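A minimal shell sketch of how a caller might assemble these optional flags. The helper name and every argument value below are hypothetical; the function only echoes the argument string it would pass to nasbackup.sh, mirroring the rule that omitting -M yields the unchanged legacy invocation.

```shell
# Sketch only: build_backup_args is a made-up helper, not part of the PR.
# Empty inputs produce an empty string, i.e. the legacy full-only call.
build_backup_args() {
  local mode="$1" bitmap_new="$2" bitmap_parent="$3" parent_path="$4"
  local args=""
  [ -n "$mode" ]          && args="$args -M $mode"
  [ -n "$bitmap_new" ]    && args="$args --bitmap-new $bitmap_new"
  [ -n "$bitmap_parent" ] && args="$args --bitmap-parent $bitmap_parent"
  [ -n "$parent_path" ]   && args="$args --parent-path $parent_path"
  printf '%s\n' "${args# }"   # trim the leading space
}
```

For example, an incremental invocation would carry all four flags, while a plain scheduled full would pass nothing and hit the legacy path.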
Adds the Java side of the incremental NAS backup feature:
TakeBackupCommand
+ mode, bitmapNew, bitmapParent, parentPath fields (null for legacy
callers — script preserves its existing behaviour when these are
omitted).
BackupAnswer
+ bitmapCreated (echoed by the agent on success)
+ incrementalFallback (true when an incremental was requested but the
agent had to fall back to full because the VM was stopped).
LibvirtTakeBackupCommandWrapper
- Forwards the new fields to nasbackup.sh.
- Strips the new BITMAP_CREATED= / INCREMENTAL_FALLBACK= marker lines
out of stdout before the existing numeric-suffix size parser runs,
so the script can keep the same "size as last line(s)" contract.
- Surfaces both markers on the BackupAnswer.
NASBackupProvider
- decideChain(vm) walks backup_details (chain_id, chain_position,
bitmap_name) for the latest BackedUp backup of the VM and decides:
* Stopped VM -> full (libvirt backup-begin needs running QEMU)
* No prior chain -> full (chain_position=0)
* chain_position+1 >= nas.backup.full.every -> new full
* otherwise -> incremental, parent=last bitmap
- Generates timestamp-based bitmap names ("backup-<epoch>") matching
what the script then registers as the libvirt checkpoint name.
- persistChainMetadata() writes parent_backup_id, bitmap_name,
chain_id, chain_position, type into the existing backup_details
key/value table (per the RFC review — no new columns on backups).
- Honours the agent's INCREMENTAL_FALLBACK= signal: re-records the
backup as a full and starts a fresh chain.
- createBackupObject() now takes a type argument so the BackupVO
reflects the actual decision instead of always being "FULL".
Refs: apache#12899
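The decideChain() rules above can be sketched in shell (illustrative only: the real implementation is Java in NASBackupProvider, and the function name here is invented):

```shell
# Inputs: VM state, chain_position of the latest BackedUp backup
# (empty if there is no prior chain), and nas.backup.full.every.
decide_chain() {
  local vm_state="$1" last_position="$2" full_every="$3"
  if [ "$vm_state" != "Running" ]; then
    echo full           # libvirt backup-begin needs a running QEMU
  elif [ -z "$last_position" ]; then
    echo full           # no prior chain: this backup gets chain_position=0
  elif [ $((last_position + 1)) -ge "$full_every" ]; then
    echo full           # cadence reached: start a new chain
  else
    echo incremental    # parent = last backup's bitmap
  fi
}
```

Note that nas.backup.full.every=1 makes every backup full, matching the "set to 1 to disable incrementals" behaviour of the ConfigKey.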
CloudStack rebuilds the libvirt domain XML on every VM start, which means
persistent QEMU dirty bitmaps don't survive a stop/start cycle. Rather
than hooking into the VM start lifecycle (intrusive across the
orchestration layer), this commit handles the missing bitmap *lazily* at
the next backup attempt:
nasbackup.sh
- When -M incremental is requested, the script first checks
`virsh checkpoint-list` for the parent bitmap. If absent, it
recreates the checkpoint on the running domain so libvirt accepts
the <incremental> reference. The next incremental will be larger
than usual (it captures all writes since recreate, not since the
previous incremental) but is correct; subsequent ones return to
normal size.
- On recreation, emits BITMAP_RECREATED=<name> on stdout for the
orchestrator to record.
BackupAnswer
+ bitmapRecreated field surfaced from the agent.
LibvirtTakeBackupCommandWrapper
- Strips BITMAP_RECREATED= line from stdout before size parsing.
- Sets answer.setBitmapRecreated(...).
NASBackupChainKeys
+ BITMAP_RECREATED key for backup_details.
NASBackupProvider
- When the agent reports a recreated bitmap, persists it under
backup_details and logs an info-level message so operators can
correlate larger-than-usual incrementals with VM restarts.
This satisfies the bitmap-loss-on-VM-restart concern from the RFC review
without touching VirtualMachineManager / StartCommand / agent lifecycle.
Refs: apache#12899
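The lazy recreation check can be sketched as follows. This is illustrative: in nasbackup.sh the checkpoint names would come from `virsh checkpoint-list` on the live domain, while here the list is passed in as text so the logic can be shown without libvirt; the function name is made up.

```shell
# Sketch: decide whether the parent checkpoint must be recreated before
# an incremental. "yes" corresponds to the BITMAP_RECREATED=<name> path.
needs_recreate() {
  local parent_bitmap="$1" checkpoint_list="$2"
  if printf '%s\n' "$checkpoint_list" | grep -qxF "$parent_bitmap"; then
    echo no    # checkpoint survived the restart; normal-sized incremental
  else
    echo yes   # recreate checkpoint; next incremental is larger but correct
  fi
}
```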
Two changes that together let an incremental NAS backup be restored
without manual chain assembly:
scripts/vm/hypervisor/kvm/nasbackup.sh
- qemu-img rebase now writes a backing-file path that is RELATIVE to
the new qcow2's directory (e.g. ../<parent-ts>/root.<uuid>.qcow2)
rather than the absolute path on the current mount point. NAS mount
points are ephemeral (mktemp -d), so an absolute reference would
not resolve when the backup is re-mounted at restore time. Relative
references are resolved by qemu-img against the file's own
directory, so the chain stays valid no matter where the NAS is
mounted next.
- Verifies the parent file exists on the NAS before rebasing.
LibvirtRestoreBackupCommandWrapper
- For file-based primary storage (local, NFS-file), the existing
code rsync'd the source qcow2 to the volume. That copies only the
differential blocks of an incremental, leaving a volume whose
backing-file reference points at a path the primary storage host
doesn't have. Now: detect a backing-chain via qemu-img info JSON
and flatten via 'qemu-img convert -O qcow2', which follows the
chain and produces a self-contained qcow2. Full backups continue
to use rsync (faster, no chain to flatten).
- The block-storage path (RBD/Linstor) already used qemu-img convert
via the QemuImg helper, which auto-flattens chains, so that path
needed no change.
Refs: apache#12899
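For the common layout where each backup sits in its own timestamped directory under one NAS mount, the relative backing-path rule can be sketched in shell (the helper name and the paths in the example are hypothetical):

```shell
# Sketch: compute ../<parent-dir>/<parent-file> relative to the new qcow2's
# directory. qemu-img resolves this against the file's own directory, so the
# chain stays valid wherever the NAS is mounted next.
relative_backing() {
  local target="$1" parent="$2"
  echo "../$(basename "$(dirname "$parent")")/$(basename "$parent")"
}
```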
Adds the delete-with-chain-repair semantics agreed in the RFC review:
scripts/vm/hypervisor/kvm/nasbackup.sh
- New '-o rebase' operation: rebases an existing on-NAS qcow2 onto
a new backing parent. Uses a SAFE rebase (no -u) so the target
absorbs blocks of the about-to-be-deleted parent before the
backing pointer is moved up to the grandparent. Writes the new
backing reference relative to the target's directory so it
survives mount-point changes.
- New CLI flags --rebase-target, --rebase-new-backing (both passed
mount-relative).
RebaseBackupCommand + LibvirtRebaseBackupCommandWrapper
- New agent command that wraps the script's rebase operation. The
provider sends one of these per child that needs re-pointing.
NASBackupProvider.deleteBackup
- Now plans the chain repair before touching files via
computeChainRepair():
    * No chain metadata -> single-file delete (legacy behaviour)
    * Tail incremental -> single delete, no rebase
    * Middle incremental -> rebase immediate child onto our parent, then
      delete; shift chain_position of all later descendants by -1
    * Full with descendants -> refuse unless forced=true; with forced=true
      delete the full + every descendant newest-first
- Updates parent_backup_id, chain_position metadata in
backup_details after each rebase so the model in the DB matches
the on-disk chain.
This implements the cascade-delete behaviour requested in @abh1sar's
review point apache#7.
Refs: apache#12899
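The case analysis in computeChainRepair() can be sketched in shell (illustrative only: the real code is Java in NASBackupProvider, and the function and outcome names below are invented):

```shell
# Sketch: classify a delete request. Inputs: whether chain metadata exists,
# the backup type, whether it has descendants, and the forced flag.
plan_delete() {
  local has_chain="$1" type="$2" has_children="$3" forced="$4"
  if [ "$has_chain" = no ]; then
    echo single-delete              # legacy backup, no chain metadata
  elif [ "$type" = incremental ] && [ "$has_children" = no ]; then
    echo single-delete              # tail incremental, no rebase needed
  elif [ "$type" = incremental ]; then
    echo rebase-child-then-delete   # middle: re-point child, shift positions
  elif [ "$has_children" = yes ] && [ "$forced" != true ]; then
    echo refuse                     # full with descendants needs forced=true
  elif [ "$has_children" = yes ]; then
    echo cascade-delete             # forced: delete descendants newest-first
  else
    echo single-delete              # full with no descendants
  fi
}
```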
Adds five new test cases to test_backup_recovery_nas.py covering the
end-to-end behaviour of the incremental NAS backup feature:
* test_incremental_chain_cadence
- Sets nas.backup.full.every=3, takes 5 backups, verifies the
type pattern is FULL, INC, INC, FULL, INC.
* test_restore_from_incremental
- FULL + 2 INCs, each with a marker file. Restores from the
latest INC and verifies all three markers are present
(i.e. qemu-img convert flattened the chain correctly).
* test_delete_middle_incremental_repairs_chain
- Builds FULL, INC1, INC2; deletes INC1 (no force needed);
restores from the surviving INC2 and verifies that markers
from FULL, INC1 (which was deleted), and INC2 are all present
— proving the rebase merged INC1's blocks into INC2.
* test_refuse_delete_full_with_children
- Verifies plain delete of a FULL that has children fails, and
delete with forced=true succeeds and removes the whole chain.
* test_stopped_vm_falls_back_to_full
- Sets cadence to 2, takes one backup (FULL), stops the VM,
triggers another (cadence would say INC). Verifies the second
backup is recorded as FULL because the agent fell back when
backup-begin couldn't run on a stopped VM.
All tests restore nas.backup.full.every to 10 in finally blocks.
Refs: apache#12899
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@ Coverage Diff @@
##               main   #13074        +/-   ##
=============================================
- Coverage     18.02%    3.52%    -14.51%
=============================================
  Files          6029      464      -5565
  Lines        542184    40137   -502047
  Branches      66451     7555     -58896
=============================================
- Hits          97740     1415     -96325
+ Misses       433428    38534   -394894
+ Partials      11016      188     -10828
Summary
Implements incremental backup support for the NAS backup provider on KVM, using QEMU dirty bitmaps and libvirt's backup-begin API. RFC: #12899. For large VMs this reduces daily backup storage by 80-95% and shortens backup windows from hours to minutes (e.g. a 500 GB VM with moderate writes goes from ~500 GB/day to ~5-15 GB/day after the initial full backup).
What's in the PR
f2a9202d74 - docs/rfcs/incremental-nas-backup.md
1981469099 - NASBackupChainKeys constants + zone-scoped nas.backup.full.every ConfigKey (default 10)
fbb916b254 - nasbackup.sh mode-aware: full+checkpoint or incremental+rebase via backup-begin
1f2aebca36 - chain metadata in backup_details (Java side)
43e2f7504a - lazy bitmap recreation after VM restart
39303fbf88 - qemu-img convert flatten for file-based primary
b8d069e127 - RebaseBackupCommand, chain repair for delete-middle, refuse-delete-full-with-children
49edc7f22c - test/integration/smoke/test_backup_recovery_nas.py

Full diff: 11 files, +1617 / −30.
Review feedback addressed (all from #12899 thread)
- Keep the backups table provider-agnostic: chain metadata lives in the backup_details kv table via NASBackupChainKeys.
- nas.backup.full.interval (days) doesn't fit hourly/ad-hoc schedules, so it became nas.backup.full.every (default 10).
- Use backup-begin for full backups too: both modes go through backup-begin; full simply omits <incremental>.
- Bitmap names are backup-<epoch> (System.currentTimeMillis()/1000).
- block-dirty-bitmap-add replaced by backup-begin's --checkpointxml; manual bitmap commands removed.
- qemu-img rebase after each incremental in nasbackup.sh, with a relative backing path so the chain survives mount-point churn.
- Stopped VM emits INCREMENTAL_FALLBACK= if cadence asked for inc.
- Deleting a full with children requires forced=true.
- Missing bitmap: check virsh checkpoint-list, recreate if missing, emit BITMAP_RECREATED=.
- Integration coverage in test_backup_recovery_nas.py.

Backwards compatibility
- The -M/--bitmap-* flags on nasbackup.sh are optional. Without them, the script preserves the legacy full-only behaviour exactly (no checkpoint creation, same XML).
- TakeBackupCommand's new fields default to null; LibvirtTakeBackupCommandWrapper only emits the new flags when set, so a 4.22 management server talking to a 4.23 agent still works.
- Existing backups (no chain_id in backup_details) are treated as standalone fulls by the cascade-delete logic; no migration needed.

Test plan
- Unit: NASBackupProviderTest should still pass.
- Integration: test_backup_recovery_nas.py (5 new cases; require required_hardware="true").
- Manual: set nas.backup.full.every=3, verify chain pattern FULL, INC, INC, FULL, INC.
- Manual: diff the restored disk against the live disk at backup time.
- Manual: confirm BITMAP_RECREATED= shows up in agent logs.

Refs