path: root/en_US.ISO8859-1/books/handbook/zfs
author    Warren Block <wblock@FreeBSD.org>  2016-06-03 18:20:29 +0000
committer Warren Block <wblock@FreeBSD.org>  2016-06-03 18:20:29 +0000
commit    0dd71013f34402680145db2009a862da5281a0e4 (patch)
tree      9b544a52e1f7e3bba4bd7cc6ae998001f6fb1a8b /en_US.ISO8859-1/books/handbook/zfs
parent    bb71c015e0eeb33b001ded93db9ec64a5fad8f7e (diff)
Correct misusage of "zpool".
PR:                     206940
Submitted by:           Shawn Debnath <sd@beastie.io>
Differential Revision:  https://reviews.freebsd.org/D6163
Notes: svn path=/head/; revision=48889
Diffstat (limited to 'en_US.ISO8859-1/books/handbook/zfs')
1 file changed, 6 insertions, 6 deletions
diff --git a/en_US.ISO8859-1/books/handbook/zfs/chapter.xml b/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
index 8fcc5a4d67..c553886946 100644
--- a/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
+++ b/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
@@ -2265,7 +2265,7 @@ passwd vi.recover
cp: /var/tmp/.zfs/snapshot/after_cp/rc.conf: Read-only file system</screen>
<para>The error reminds the user that snapshots are read-only
- and can not be changed after creation. No files can be
+ and cannot be changed after creation. Files cannot be
copied into or removed from snapshot directories because
that would change the state of the dataset they
@@ -2315,7 +2315,7 @@ camino/home/joe@backup 0K - 87K -</screen>
<para>A typical use for clones is to experiment with a specific
dataset while keeping the snapshot around to fall back to in
- case something goes wrong. Since snapshots can not be
+ case something goes wrong. Since snapshots cannot be
changed, a read/write clone of a snapshot is created. After
the desired result is achieved in the clone, the clone can be
promoted to a dataset and the old file system removed. This
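The snapshot-then-clone workflow described in the hunk above can be sketched with a few commands; the pool and dataset names (`mypool/projects`, `mypool/testing`) are hypothetical, and the commands require a system with ZFS:

```shell
# Snapshot -> clone -> promote workflow (hypothetical names; needs ZFS).
zfs snapshot mypool/projects@safe                 # read-only fallback point
zfs clone mypool/projects@safe mypool/testing     # writable copy of the snapshot
# ...experiment under /mypool/testing until the desired result is achieved...
zfs promote mypool/testing                        # clone becomes an independent dataset
zfs destroy mypool/projects                       # old file system removed
```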
@@ -3461,7 +3461,7 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
combining the traditionally separate roles,
<acronym>ZFS</acronym> is able to overcome previous limitations
that prevented <acronym>RAID</acronym> groups being able to
- grow. Each top level device in a zpool is called a
+ grow. Each top level device in a pool is called a
<emphasis>vdev</emphasis>, which can be a simple disk or a
<acronym>RAID</acronym> transformation such as a mirror or
<acronym>RAID-Z</acronym> array. <acronym>ZFS</acronym> file
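The vdev terminology in the hunk above can be illustrated with a minimal sketch; the pool name and device names (`ada1`-`ada3`) are assumptions, and a system with ZFS is required:

```shell
# A pool whose single top level vdev is a RAID-Z transformation
# of three disks (hypothetical device names; needs ZFS installed).
zpool create mypool raidz /dev/ada1 /dev/ada2 /dev/ada3
zpool status mypool
```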
@@ -3476,7 +3476,7 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
<tgroup cols="2">
<tbody valign="top">
- <entry xml:id="zfs-term-zpool">zpool</entry>
+ <entry xml:id="zfs-term-pool">pool</entry>
<entry>A storage <emphasis>pool</emphasis> is the most
basic building block of <acronym>ZFS</acronym>. A pool
@@ -3534,7 +3534,7 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
pools can be backed by regular files, this is
especially useful for testing and experimentation.
Use the full path to the file as the device path
- in the zpool create command. All vdevs must be
+ in <command>zpool create</command>. All vdevs must be
at least 128&nbsp;MB in size.</para>
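The file-backed vdev note in the hunk above can be sketched as follows; the file names and pool name are hypothetical, and the `zpool` commands themselves need a system with ZFS:

```shell
# Create two backing files of the 128 MB minimum size noted above.
truncate -s 128M /tmp/zfstest0.img /tmp/zfstest1.img
# Use the full path to each file as the device path:
zpool create testpool mirror /tmp/zfstest0.img /tmp/zfstest1.img
zpool status testpool
# Clean up when done:
zpool destroy testpool
rm /tmp/zfstest0.img /tmp/zfstest1.img
```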
@@ -3641,7 +3641,7 @@ vfs.zfs.vdev.cache.size="5M"</programlisting>
- Adding a cache vdev to a zpool will add the
+ Adding a cache vdev to a pool will add the
storage of the cache to the <link
Cache devices cannot be mirrored. Since a cache
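The cache-vdev lines above describe attaching a cache device to an existing pool; a minimal sketch, assuming an existing pool `mypool` and a spare disk `/dev/ada3` (both hypothetical):

```shell
# Add a cache device to an existing pool.  Per the text above,
# cache devices cannot be mirrored, so each is listed individually.
zpool add mypool cache /dev/ada3
zpool status mypool
```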