author     Sergio Carlavilla Delgado <carlavilla@FreeBSD.org>  2021-03-03 20:55:40 +0000
committer  Sergio Carlavilla Delgado <carlavilla@FreeBSD.org>  2021-03-03 20:55:40 +0000
commit     3c2a5e96f9abb69d1739cf08891f3a9f6c629a8e (patch)
tree       4af9681416490a41e29f01345d7e4a9716c799e1
parent     8ee5df01793364ace57b68352bcb6eecc3803e68 (diff)
download   doc-3c2a5e96f9abb69d1739cf08891f3a9f6c629a8e.tar.gz
           doc-3c2a5e96f9abb69d1739cf08891f3a9f6c629a8e.zip
Improve wording in ZFS chapter
PR:        253075
Patch by:  panden(at)gmail.com
-rw-r--r--  documentation/content/en/books/handbook/zfs/_index.adoc  6
-rw-r--r--  documentation/content/pl/books/handbook/zfs/_index.adoc  6
2 files changed, 6 insertions, 6 deletions
diff --git a/documentation/content/en/books/handbook/zfs/_index.adoc b/documentation/content/en/books/handbook/zfs/_index.adoc
index e5a5806e92..d6cc133db1 100644
--- a/documentation/content/en/books/handbook/zfs/_index.adoc
+++ b/documentation/content/en/books/handbook/zfs/_index.adoc
@@ -514,7 +514,7 @@ A pool that is no longer needed can be destroyed so that the disks can be reused
There are two cases for adding disks to a zpool: attaching a disk to an existing vdev with `zpool attach`, or adding vdevs to the pool with `zpool add`. Only some <<zfs-term-vdev,vdev types>> allow disks to be added to the vdev after creation.
-A pool created with a single disk lacks redundancy. Corruption can be detected but not repaired, because there is no other copy of the data. The <<zfs-term-copies,copies>> property may be able to recover from a small failure such as a bad sector, but does not provide the same level of protection as mirroring or RAID-Z. Starting with a pool consisting of a single disk vdev, `zpool attach` can be used to add an additional disk to the vdev, creating a mirror. `zpool attach` can also be used to add additional disks to a mirror group, increasing redundancy and read performance. If the disks being used for the pool are partitioned, replicate the layout of the first disk on to the second, `gpart backup` and `gpart restore` can be used to make this process easier.
+A pool created with a single disk lacks redundancy. Corruption can be detected but not repaired, because there is no other copy of the data. The <<zfs-term-copies,copies>> property may be able to recover from a small failure such as a bad sector, but does not provide the same level of protection as mirroring or RAID-Z. Starting with a pool consisting of a single disk vdev, `zpool attach` can be used to add an additional disk to the vdev, creating a mirror. `zpool attach` can also be used to add additional disks to a mirror group, increasing redundancy and read performance. If the disks being used for the pool are partitioned, replicate the layout of the first disk on to the second. `gpart backup` and `gpart restore` can be used to make this process easier.
Upgrade the single disk (stripe) vdev _ada0p3_ to a mirror by attaching _ada1p3_:
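The handbook's command for this step is not included in this excerpt. The following is a hedged sketch only, assuming a pool named _mypool_ and that _ada1_ is to receive the same partition layout as _ada0_:

[source,bash]
....
# Copy the partition layout of the first disk to the second
# (this overwrites any existing partition table on ada1).
gpart backup ada0 | gpart restore -F ada1

# Attach the new partition to the existing single-disk vdev, turning it
# into a mirror, then check that the resilver completes.
zpool attach mypool ada0p3 ada1p3
zpool status mypool
....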
@@ -882,7 +882,7 @@ NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALT
healer 960M 92.5K 960M - - 0% 0% 1.00x ONLINE -
....
-Some important data that to be protected from data errors using the self-healing feature is copied to the pool. A checksum of the pool is created for later comparison.
+Some important data that have to be protected from data errors using the self-healing feature are copied to the pool. A checksum of the pool is created for later comparison.
[source,bash]
....
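# The commands for this step are not part of the excerpt above; this is a
# hedged sketch only. The pool from the surrounding example is assumed to be
# mounted at /healer, and the source path is hypothetical.
cp /some/important/data /healer/data.bin
# Record a checksum of the copied data for later comparison.
sha1 /healer/data.bin > /var/tmp/checksum.txt
cat /var/tmp/checksum.txt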
@@ -2343,7 +2343,7 @@ A configuration of two RAID-Z2 vdevs consisting of 8 disks each would create som
|A ZFS dataset is most often used as a file system. Like most other file systems, a ZFS file system is mounted somewhere in the systems directory hierarchy and contains files and directories of its own with permissions, flags, and other metadata.
|[[zfs-term-volume]]Volume
-|In additional to regular file system datasets, ZFS can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of ZFS, such as UFS virtualization, or exporting iSCSI extents.
+|In addition to regular file system datasets, ZFS can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of ZFS, such as UFS virtualization, or exporting iSCSI extents.
|[[zfs-term-snapshot]]Snapshot
|The <<zfs-term-cow,copy-on-write>> (COW) design of ZFS allows for nearly instantaneous, consistent snapshots with arbitrary names. After taking a snapshot of a dataset, or a recursive snapshot of a parent dataset that will include all child datasets, new data is written to new blocks, but the old blocks are not reclaimed as free space. The snapshot contains the original version of the file system, and the live file system contains any changes made since the snapshot was taken. No additional space is used. As new data is written to the live file system, new blocks are allocated to store this data. The apparent size of the snapshot will grow as the blocks are no longer used in the live file system, but only in the snapshot. These snapshots can be mounted read only to allow for the recovery of previous versions of files. It is also possible to <<zfs-zfs-snapshot,rollback>> a live file system to a specific snapshot, undoing any changes that took place after the snapshot was taken. Each block in the pool has a reference counter which keeps track of how many snapshots, clones, datasets, or volumes make use of that block. As files and snapshots are deleted, the reference count is decremented. When a block is no longer referenced, it is reclaimed as free space. Snapshots can also be marked with a <<zfs-zfs-snapshot,hold>>. When a snapshot is held, any attempt to destroy it will return an `EBUSY` error. Each snapshot can have multiple holds, each with a unique name. The <<zfs-zfs-snapshot,release>> command removes the hold so the snapshot can deleted. Snapshots can be taken on volumes, but they can only be cloned or rolled back, not mounted independently.
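The volume entry in the terminology table above mentions running another file system, such as UFS, on top of a ZFS volume. A minimal sketch of what that looks like, not part of this commit and assuming a pool named _mypool_:

[source,bash]
....
# Create a 4 GB volume; it appears as a block device under /dev/zvol/.
zfs create -V 4G mypool/vol0

# Put a UFS file system on the volume and mount it.
newfs /dev/zvol/mypool/vol0
mount /dev/zvol/mypool/vol0 /mnt
....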
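The snapshot entry describes holds, release, and rollback. The commands involved are shown here as an illustrative sketch with a hypothetical dataset _mypool/home_, not as part of the commit:

[source,bash]
....
# Take a snapshot and place a hold on it; a held snapshot cannot be destroyed.
zfs snapshot mypool/home@backup1
zfs hold keepme mypool/home@backup1
zfs destroy mypool/home@backup1      # fails while the hold exists (EBUSY)

# Release the hold, then either roll the dataset back to the snapshot
# or destroy the snapshot.
zfs release keepme mypool/home@backup1
zfs rollback mypool/home@backup1
....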
diff --git a/documentation/content/pl/books/handbook/zfs/_index.adoc b/documentation/content/pl/books/handbook/zfs/_index.adoc
index e9c9e67f94..4a661cdb5e 100644
--- a/documentation/content/pl/books/handbook/zfs/_index.adoc
+++ b/documentation/content/pl/books/handbook/zfs/_index.adoc
@@ -517,7 +517,7 @@ A pool that is no longer needed can be destroyed so that the disks can be reused
There are two cases for adding disks to a zpool: attaching a disk to an existing vdev with `zpool attach`, or adding vdevs to the pool with `zpool add`. Only some <<zfs-term-vdev,vdev types>> allow disks to be added to the vdev after creation.
-A pool created with a single disk lacks redundancy. Corruption can be detected but not repaired, because there is no other copy of the data. The <<zfs-term-copies,copies>> property may be able to recover from a small failure such as a bad sector, but does not provide the same level of protection as mirroring or RAID-Z. Starting with a pool consisting of a single disk vdev, `zpool attach` can be used to add an additional disk to the vdev, creating a mirror. `zpool attach` can also be used to add additional disks to a mirror group, increasing redundancy and read performance. If the disks being used for the pool are partitioned, replicate the layout of the first disk on to the second, `gpart backup` and `gpart restore` can be used to make this process easier.
+A pool created with a single disk lacks redundancy. Corruption can be detected but not repaired, because there is no other copy of the data. The <<zfs-term-copies,copies>> property may be able to recover from a small failure such as a bad sector, but does not provide the same level of protection as mirroring or RAID-Z. Starting with a pool consisting of a single disk vdev, `zpool attach` can be used to add an additional disk to the vdev, creating a mirror. `zpool attach` can also be used to add additional disks to a mirror group, increasing redundancy and read performance. If the disks being used for the pool are partitioned, replicate the layout of the first disk on to the second. `gpart backup` and `gpart restore` can be used to make this process easier.
Upgrade the single disk (stripe) vdev _ada0p3_ to a mirror by attaching _ada1p3_:
@@ -885,7 +885,7 @@ NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALT
healer 960M 92.5K 960M - - 0% 0% 1.00x ONLINE -
....
-Some important data that to be protected from data errors using the self-healing feature is copied to the pool. A checksum of the pool is created for later comparison.
+Some important data that have to be protected from data errors using the self-healing feature are copied to the pool. A checksum of the pool is created for later comparison.
[source,bash]
....
@@ -2333,7 +2333,7 @@ A configuration of two RAID-Z2 vdevs consisting of 8 disks each would create som
|A ZFS dataset is most often used as a file system. Like most other file systems, a ZFS file system is mounted somewhere in the systems directory hierarchy and contains files and directories of its own with permissions, flags, and other metadata.
|[[zfs-term-volume]]Volume
-|In additional to regular file system datasets, ZFS can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of ZFS, such as UFS virtualization, or exporting iSCSI extents.
+|In addition to regular file system datasets, ZFS can also create volumes, which are block devices. Volumes have many of the same features, including copy-on-write, snapshots, clones, and checksumming. Volumes can be useful for running other file system formats on top of ZFS, such as UFS virtualization, or exporting iSCSI extents.
|[[zfs-term-snapshot]]Snapshot
|The <<zfs-term-cow,copy-on-write>> (COW) design of ZFS allows for nearly instantaneous, consistent snapshots with arbitrary names. After taking a snapshot of a dataset, or a recursive snapshot of a parent dataset that will include all child datasets, new data is written to new blocks, but the old blocks are not reclaimed as free space. The snapshot contains the original version of the file system, and the live file system contains any changes made since the snapshot was taken. No additional space is used. As new data is written to the live file system, new blocks are allocated to store this data. The apparent size of the snapshot will grow as the blocks are no longer used in the live file system, but only in the snapshot. These snapshots can be mounted read only to allow for the recovery of previous versions of files. It is also possible to <<zfs-zfs-snapshot,rollback>> a live file system to a specific snapshot, undoing any changes that took place after the snapshot was taken. Each block in the pool has a reference counter which keeps track of how many snapshots, clones, datasets, or volumes make use of that block. As files and snapshots are deleted, the reference count is decremented. When a block is no longer referenced, it is reclaimed as free space. Snapshots can also be marked with a <<zfs-zfs-snapshot,hold>>. When a snapshot is held, any attempt to destroy it will return an `EBUSY` error. Each snapshot can have multiple holds, each with a unique name. The <<zfs-zfs-snapshot,release>> command removes the hold so the snapshot can deleted. Snapshots can be taken on volumes, but they can only be cloned or rolled back, not mounted independently.