From eab1c5d1f64d77131928469081729d6370841075 Mon Sep 17 00:00:00 2001
From: Ceri Davies
Date: Tue, 12 Jan 2021 18:57:32 +0100
Subject: books/: Address more instances of sentences beginning with 'Because...'

As was the case with the previous commit, the intention is to avoid
sentence fragments as well as sentences that can be mistaken for them,
since the handbook isn't written in a style that makes use of
subordinate conjunctions.

While touching the relevant files, I also fixed a few issues pointed
out by PauAmma, and reflowed a sentence as a result.

PR:		252519
Submitted by:	ceri@
Reviewed by:	PauAmma
---
 en_US.ISO8859-1/books/handbook/zfs/chapter.xml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/en_US.ISO8859-1/books/handbook/zfs/chapter.xml b/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
index c6d89f091a..ef1064a438 100644
--- a/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
+++ b/en_US.ISO8859-1/books/handbook/zfs/chapter.xml
@@ -1282,7 +1282,7 @@ errors: No known data errors
       2 TB drive.  The usable space is 1 TB.
       When the 1 TB drive is replaced with another 2 TB drive,
       the resilvering process copies the existing data onto the new
-      drive.  Because
+      drive.  As
       both of the devices now have 2 TB capacity, the mirror's
       available space can be grown to
       2 TB.
@@ -4045,7 +4045,7 @@ vfs.zfs.vdev.cache.size="5M"
       Clones can be promoted, reversing this dependency and
       making the clone the parent and the previous parent the
       child.  This operation requires no
-      additional space.  Because the amount of space used by
+      additional space.  Since the amount of space used by
       the parent and child is reversed, existing quotas and
       reservations might be affected.
@@ -4201,7 +4201,7 @@ vfs.zfs.vdev.cache.size="5M"
       blocks will be checked byte-for-byte to ensure it is
       actually identical.  If the data is not identical, the hash
       collision will be noted and the two blocks will be
-      stored separately.  Because DDT must
+      stored separately.  As DDT must
       store the hash of each unique block, it consumes a very
       large amount of memory.  A general rule of thumb is
       5-6 GB of ram per 1 TB of deduplicated data).
-- 
cgit v1.2.3
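
For readers following the reworded passages, the behaviour they describe maps onto a handful of
ZFS commands: replacing a mirror member with a larger disk and growing the vdev, promoting a
clone over its origin, and enabling deduplication (with its DDT memory cost of roughly
5-6 GB of RAM per TB of deduplicated data).  The sketch below is illustrative only and is not
part of the patch; the pool, dataset, and device names (mypool, ada0p3, ada1p3, mypool/clone,
mypool/data) are hypothetical.

    # zpool set autoexpand=on mypool          # let vdevs grow automatically after replacement
    # zpool replace mypool ada0p3 ada1p3      # resilver onto the larger replacement disk
    # zpool online -e mypool ada1p3           # expand explicitly if autoexpand was left off
    # zfs promote mypool/clone                # make the clone the parent of its former origin
    # zfs set dedup=on mypool/data            # enable dedup; budget RAM for the DDT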