
Commit e61a0da

silopolis authored and kdave committed
btrfs-progs: docs: subvolume intro editing
* fix repetition
* wording and punctuation in 'Nested subvolumes'
* wording and punctuation in 'system root layouts'
* wording and punctuation in 'Mount options'
* wording in 'Inode numbers'
* wording and punctuation in 'Performance'
1 parent 5fba6df commit e61a0da

File tree

1 file changed, +21 -22 lines changed


Documentation/ch-subvolume-intro.rst

Lines changed: 21 additions & 22 deletions
@@ -61,7 +61,7 @@ from read-only to read-write will break the assumptions and may lead to
 unexpected changes in the resulting incremental stream.
 
 A snapshot that was created by send/receive will be read-only, with different
-last change generation, read-only and with set *received_uuid* which identifies
+last change generation, and with set *received_uuid* which identifies
 the subvolume on the filesystem that produced the stream. The use case relies
 on matching data on both sides. Changing the subvolume to read-write after it
 has been received requires to reset the *received_uuid*. As this is a notable
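
For illustration, a minimal sketch of the workflow this hunk describes; the paths are hypothetical and the exact warning/confirmation behavior may differ between btrfs-progs versions:

    # inspect the received snapshot: it is read-only and carries a Received UUID
    btrfs subvolume show /backup/snap
    btrfs property get -ts /backup/snap ro        # ro=true

    # flipping it to read-write invalidates it as a base for further
    # incremental receives; newer btrfs-progs may warn because received_uuid is set
    btrfs property set -ts /backup/snap ro false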
@@ -85,11 +85,10 @@ organize them, whether to have a flat layout (all subvolumes are direct
 descendants of the toplevel one), or nested.
 
 What should be mentioned early is that a snapshotting is not recursive, so a
-subvolume or a snapshot is effectively a barrier and no files in the nested
-appear in the snapshot. Instead there's a stub subvolume (also sometimes called
-*empty subvolume* with the same name as original subvolume, with inode number
-2). This can be used intentionally but could be confusing in case of nested
-layouts.
+subvolume or a snapshot is effectively a barrier and no files in the nested subvolumes
+appear in the snapshot. Instead, there's a stub subvolume, also sometimes called
+*empty subvolume*, with the same name as original subvolume and with inode number 2.
+This can be used intentionally but could be confusing in case of nested layouts.
 
 .. code-block:: bash
 
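
A minimal sketch of the barrier behavior edited in this hunk, assuming /mnt is the mounted toplevel and the names are made up:

    # create a subvolume with another subvolume nested inside it
    btrfs subvolume create /mnt/top
    btrfs subvolume create /mnt/top/nested
    touch /mnt/top/file /mnt/top/nested/file

    # snapshotting is not recursive: the nested subvolume's content is not
    # included, only a stub (empty) subvolume with inode number 2 remains
    btrfs subvolume snapshot /mnt/top /mnt/top-snap
    ls /mnt/top-snap/nested          # empty
    stat -c %i /mnt/top-snap/nested  # prints 2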
@@ -124,14 +123,14 @@ log files would get rolled back too, or any data that are stored on the root
 filesystem but are not meant to be rolled back either (database files, VM
 images, ...).
 
-Here we could utilize the snapshotting barrier mentioned above, each directory
-that stores data to be preserved across rollbacks is it's own subvolume. This
-could be e.g. :file:`/var`. Further more-fine grained partitioning could be done, e.g.
+Here we could utilize the snapshotting barrier mentioned above, making each directory
+that stores data to be preserved across rollbacks its own subvolume. This
+could be e.g. :file:`/var`. Further more fine-grained partitioning could be done, e.g.
 adding separate subvolumes for :file:`/var/log`, :file:`/var/cache` etc.
 
-That there are separate subvolumes requires separate actions to take the
-snapshots (here it gets disconnected from the system root snapshots). This needs
-to be taken care of by system tools, installers together with selection of which
+The fact that there are separate subvolumes requires separate actions to take the
+snapshots (here, it gets disconnected from the system root snapshots). This needs
+to be taken care of by system tools, installers, together with selection of which
 directories are highly recommended to be separate subvolumes.
 
 Mount options
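
A sketch of the consequence described in this hunk, assuming / and /var are already separate subvolumes of the same filesystem (paths are illustrative):

    # a snapshot of the root subvolume stops at the /var boundary,
    # so data meant to survive a rollback is not captured there
    btrfs subvolume snapshot / /.snapshots/root-pre-update

    # /var (and any finer-grained subvolumes) must be snapshotted separately
    btrfs subvolume snapshot /var /.snapshots/var-pre-update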
@@ -142,16 +141,16 @@ specific, handled by the filesystem. The following list shows which are
 applicable to individual subvolume mounts, while there are more options that
 always affect the whole filesystem:
 
-- generic: noatime/relatime/..., nodev, nosuid, ro, rw, dirsync
-- fs-specific: compress, autodefrag, nodatacow, nodatasum
+- Generic: noatime/relatime/..., nodev, nosuid, ro, rw, dirsync
+- Filesystem-specific: compress, autodefrag, nodatacow, nodatasum
 
-An example of whole filesystem options is e.g. *space_cache*, *rescue*, *device*,
+Examples of whole filesystem options are e.g. *space_cache*, *rescue*, *device*,
 *skip_balance*, etc. The exceptional options are *subvol* and *subvolid* that
 are actually used for mounting a given subvolume and can be specified only once
 for the mount.
 
-Subvolumes belong to a single filesystem and as implemented now all share the
-same specific mount options, changes done by remount have immediate effect. This
+Subvolumes belong to a single filesystem and, as implemented now, all share the
+same specific mount options. Also, changes done by remount have immediate effect. This
 may change in the future.
 
 Mounting a read-write snapshot as read-only is possible and will not change the
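
A short example of the behavior covered by this hunk; the device name and subvolume names are hypothetical:

    # mount individual subvolumes, either by path or by id
    mount -o subvol=@home /dev/sdb1 /home
    mount -o subvolid=256 /dev/sdb1 /mnt/data

    # a remount with a filesystem-specific option currently takes effect for
    # all subvolumes of that filesystem, not only the one named here
    mount -o remount,compress=zstd /home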
@@ -189,19 +188,19 @@ original inode numbers.
 
 .. note::
 Inode number is not a filesystem-wide unique identifier, some applications
-assume that. Please use pair *subvolumeid:inodenumber* for that purpose.
+assume that. Please use the *subvolumeid:inodenumber* pair for that purpose.
 The subvolume id can be read by :ref:`btrfs inspect-internal rootid<man-inspect-rootid>`
 or by the ioctl :ref:`BTRFS_IOC_INO_LOOKUP`.
 
 Performance
 -----------
 
-Subvolume creation needs to flush dirty data that belong to the subvolume, this
-step may take some time, otherwise once there's nothing else to do, the snapshot
-is instant and in the metadata it only creates a new tree root copy.
+Subvolume creation needs to flush dirty data that belong to the subvolume and this
+step may take some time. Otherwise, once there's nothing else to do, the snapshot
+is instantaneous and only creates a new tree root copy in the metadata.
 
 Snapshot deletion has two phases: first its directory is deleted and the
-subvolume is added to a list, then the list is processed one by one and the
+subvolume is added to a queuing list, then the list is processed one by one and the
 data related to the subvolume get deleted. This is usually called *cleaning* and
 can take some time depending on the amount of shared blocks (can be a lot of
 metadata updates), and the number of currently queued deleted subvolumes.
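
Two short sketches of the points edited in this hunk, with made-up paths:

    # build the subvolumeid:inodenumber pair for a file
    btrfs inspect-internal rootid /mnt/data/file   # id of the containing subvolume
    stat -c %i /mnt/data/file                      # inode number inside that subvolume

    # deletion only queues the subvolume for cleaning; waiting for the
    # cleaner to finish can be done explicitly
    btrfs subvolume delete /mnt/snapshots/old
    btrfs subvolume sync /mnt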
