Diffstat (limited to 'el_GR.ISO8859-7/books/handbook/filesystems/chapter.sgml')
-rw-r--r--  el_GR.ISO8859-7/books/handbook/filesystems/chapter.sgml  630
1 file changed, 630 insertions, 0 deletions
diff --git a/el_GR.ISO8859-7/books/handbook/filesystems/chapter.sgml b/el_GR.ISO8859-7/books/handbook/filesystems/chapter.sgml
new file mode 100644
index 0000000000..18d6e92a6d
--- /dev/null
+++ b/el_GR.ISO8859-7/books/handbook/filesystems/chapter.sgml
@@ -0,0 +1,630 @@
+<!--
+
+ The FreeBSD Handbook: File Systems
+
+ The FreeBSD Greek Documentation Project
+
+ $FreeBSD$
+
+ %SOURCE% en_US.ISO8859-1/books/handbook/filesystems/chapter.sgml
+ %SRCID% 1.3
+
+-->
+
+<chapter id="filesystems">
+ <chapterinfo>
+ <authorgroup>
+ <author>
+ <firstname>Tom</firstname>
+ <surname>Rhodes</surname>
+ <contrib>Written by </contrib>
+ </author>
+ </authorgroup>
+ </chapterinfo>
+
+ <title>File Systems Support</title>
+
+ <sect1 id="filesystems-synopsis">
+ <title>Synopsis</title>
+
+ <indexterm><primary>File Systems</primary></indexterm>
+ <indexterm>
+ <primary>File Systems Support</primary>
+ <see>File Systems</see>
+ </indexterm>
+
+ <para>File systems are an integral part of any operating system.
+ They allow users to create and store files, provide access to
+ data, and make hard disks useful. Different operating systems
+ usually use different native file systems. The &os; file system
+ is the Fast File System, or <acronym>FFS</acronym>, which is
+ derived from the original Unix&trade; file system, also known as
+ <acronym>UFS</acronym>. This is the native &os; file system,
+ which is used on hard disks and provides access to the
+ data.</para>
+
+ <para>&os; also offers a multitude of different file systems in
+ order to provide local access to data created by other operating
+ systems, e.g.&nbsp;data stored on local <acronym>USB</acronym>
+ storage devices, flash drives, and hard disks. There is also
+ support for other, non-native file systems, such as the &linux;
+ Extended File System (<acronym>EXT</acronym>) and the &sun;
+ Z File System (<acronym>ZFS</acronym>).</para>
+
+ <para>&os; provides a different level of support for each file
+ system. Some require a kernel module to be loaded, while others
+ require a toolset to be installed. This chapter is designed to
+ help &os; users gain access to other file systems on their
+ systems, starting with the &sun; Z File System.</para>
+
+ <para>After reading this chapter, you will know:</para>
+
+ <itemizedlist>
+ <listitem>
+ <para>The difference between native and supported file
+ systems.</para>
+ </listitem>
+
+ <listitem>
+ <para>Which file systems are supported by &os;.</para>
+ </listitem>
+
+ <listitem>
+ <para>How to enable, configure, access, and make use of
+ non-native file systems.</para>
+ </listitem>
+ </itemizedlist>
+
+ <para>Before reading this chapter, you should:</para>
+
+ <itemizedlist>
+ <listitem>
+ <para>Understand &unix; and &os; basics
+ (<xref linkend="basics">).</para>
+ </listitem>
+
+ <listitem>
+ <para>Be familiar with the basics of kernel configuration and
+ installation of a custom kernel
+ (<xref linkend="kernelconfig">).</para>
+ </listitem>
+
+ <listitem>
+ <para>Feel comfortable installing third-party software in &os;
+ (<xref linkend="ports">).</para>
+ </listitem>
+
+ <listitem>
+ <para>Have some familiarity with disks, storage media, and the
+ corresponding device names in &os;
+ (<xref linkend="disks">).</para>
+ </listitem>
+ </itemizedlist>
+
+ <!--
+ Temporary warning to avoid listing experimental versions
+ and production versions of FreeBSD with this technology.
+ -->
+ <warning>
+ <para>At this time, <acronym>ZFS</acronym> is considered an
+ experimental feature. Some options may be lacking in
+ functionality, and others may not work at all. Over time, this
+ feature will be considered production-ready and this
+ documentation will be updated to reflect that situation.</para>
+ </warning>
+ </sect1>
+
+ <sect1 id="filesystems-zfs">
+ <title>The Z File System</title>
+
+ <para>The Z&nbsp;file system, developed by &sun;, is a new
+ technology designed to use a pooled storage method. This means
+ that space is only used as it is needed for data storage. It
+ has also been designed for maximum data integrity, supporting
+ data snapshots, multiple copies, and data checksums. A new
+ data replication model, known as <acronym>RAID</acronym>-Z has
+ been added. The <acronym>RAID</acronym>-Z model is similar
+ to <acronym>RAID</acronym>5 but is designed to prevent data
+ write corruption.</para>
+
+ <sect2>
+ <title>ZFS Tuning</title>
+
+ <para>The <acronym>ZFS</acronym> subsystem uses a significant
+ portion of the system's resources, so some tuning may be required
+ to provide maximum efficiency during everyday use. As an
+ experimental feature in &os; this may change in the near future;
+ however, at this time, the following steps are recommended.</para>
+
+ <sect3>
+ <title>Memory</title>
+
+ <para>The total system memory should be at least one gigabyte,
+ with two gigabytes or more recommended. In all of the
+ examples here, the system has one gigabyte of memory with
+ several other tuning mechanisms in place.</para>
+
+ <para>Some people have had success with less than one gigabyte of
+ memory, but with such a limited amount of physical memory, when
+ the system is under heavy load, it is very plausible that &os;
+ will panic due to memory exhaustion.</para>
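+
+ <para>To check how much physical memory is installed before
+ attempting this, the values known to the kernel may be queried
+ with &man.sysctl.8;. This is only an illustrative check and is
+ not required for the tuning that follows:</para>
+
+ <screen>&prompt.root; <userinput>sysctl hw.physmem hw.realmem</userinput></screen>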
+ </sect3>
+
+ <sect3>
+ <title>Kernel Configuration</title>
+
+ <para>It is recommended that unused drivers and options
+ be removed from the kernel configuration file. Since most
+ devices are available as modules, they may simply be loaded
+ using the <filename>/boot/loader.conf</filename> file.</para>
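+
+ <para>As an illustration, modules are loaded at boot by adding
+ lines such as the following to
+ <filename>/boot/loader.conf</filename>. The
+ <literal>zfs_load</literal> knob is the standard way to load the
+ <acronym>ZFS</acronym> module; the network driver line is only an
+ example and may not be needed on a given system:</para>
+
+ <programlisting># Load the ZFS kernel module at boot
+zfs_load="YES"
+# Example only: a device driver, here the em(4) network driver
+if_em_load="YES"</programlisting>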
+
+ <para>Users of the i386 architecture should add the following
+ option to their kernel configuration file, rebuild their
+ kernel, and reboot:</para>
+
+ <programlisting>options KVA_PAGES=512</programlisting>
+
+ <para>This option will expand the kernel address space, thus
+ allowing the <varname>vm.kvm_size</varname> tunable to be
+ pushed beyond the currently imposed limit of 1&nbsp;GB
+ (2&nbsp;GB for <acronym>PAE</acronym>). To find the most
+ suitable value for this option, divide the desired address
+ space in megabytes by four (4). In this case, it is
+ <literal>512</literal> for 2&nbsp;GB.</para>
+ </sect3>
+
+ <sect3>
+ <title>Loader Tunables</title>
+
+ <para>The <devicename>kmem</devicename> address space should be
+ increased on all &os; architectures. On the test system with
+ one gigabyte of physical memory, success was achieved with the
+ following options which should be placed in
+ the <filename>/boot/loader.conf</filename> file and the system
+ restarted:</para>
+
+ <programlisting>vm.kmem_size="330M"
+vm.kmem_size_max="330M"
+vfs.zfs.arc_max="40M"
+vfs.zfs.vdev.cache.size="5M"</programlisting>
+
+ <para>For a more detailed list of recommendations for ZFS-related
+ tuning, see
+ <ulink url="http://wiki.freebsd.org/ZFSTuningGuide"></ulink>.</para>
+ </sect3>
+ </sect2>
+
+ <sect2>
+ <title>Using <acronym>ZFS</acronym></title>
+
+ <para>There is a start-up mechanism that allows &os; to mount
+ <acronym>ZFS</acronym> pools during system initialization. To
+ enable it, issue the following commands:</para>
+
+ <screen>&prompt.root; <userinput>echo 'zfs_enable="YES"' &gt;&gt; /etc/rc.conf</userinput>
+&prompt.root; <userinput>/etc/rc.d/zfs start</userinput></screen>
+
+ <para>The remainder of this document assumes two
+ <acronym>SCSI</acronym> disks are available, and their device names
+ are <devicename><replaceable>da0</replaceable></devicename>
+ and <devicename><replaceable>da1</replaceable></devicename>
+ respectively. Users of <acronym>IDE</acronym> hardware may
+ use the <devicename><replaceable>ad</replaceable></devicename>
+ devices in place of <acronym>SCSI</acronym> hardware.</para>
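+
+ <para>To confirm which disk devices are actually present on a
+ system before creating a pool, the boot messages or the
+ <acronym>CAM</acronym> subsystem may be consulted. These are
+ only illustrative commands; the device names used throughout
+ this section are placeholders:</para>
+
+ <screen>&prompt.root; <userinput>camcontrol devlist</userinput>
+&prompt.root; <userinput>egrep 'da[0-9]|ad[0-9]' /var/run/dmesg.boot</userinput></screen>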
+
+ <sect3>
+ <title>Single Disk Pool</title>
+
+ <para>To create a <acronym>ZFS</acronym> pool on a single disk
+ device, use the <command>zpool</command> command:</para>
+
+ <screen>&prompt.root; <userinput>zpool create example /dev/da0</userinput></screen>
+
+ <para>To view the new pool, review the output of
+ <command>df</command>:</para>
+
+ <screen>&prompt.root; <userinput>df</userinput>
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235230 1628718 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032846 48737598 2% /usr
+example 17547136 0 17547136 0% /example</screen>
+
+ <para>This output clearly shows that the <literal>example</literal>
+ pool has not only been created but
+ <emphasis>mounted</emphasis> as well. It is also accessible
+ just like a normal file system; files may be created on it and
+ users are able to browse it, as in the following
+ example:</para>
+
+ <screen>&prompt.root; <userinput>cd /example</userinput>
+&prompt.root; <userinput>ls</userinput>
+&prompt.root; <userinput>touch testfile</userinput>
+&prompt.root; <userinput>ls -al</userinput>
+total 4
+drwxr-xr-x 2 root wheel 3 Aug 29 23:15 .
+drwxr-xr-x 21 root wheel 512 Aug 29 23:12 ..
+-rw-r--r-- 1 root wheel 0 Aug 29 23:15 testfile</screen>
+
+ <para>Unfortunately this pool is not taking advantage of
+ any <acronym>ZFS</acronym> features. Create a file system
+ on this pool, and enable compression on it:</para>
+
+ <screen>&prompt.root; <userinput>zfs create example/compressed</userinput>
+&prompt.root; <userinput>zfs set compression=gzip example/compressed</userinput></screen>
+
+ <para>The <literal>example/compressed</literal> file system is now
+ a <acronym>ZFS</acronym> compressed file system. Try copying
+ some large files into
+ <filename class="directory">/example/compressed</filename>.</para>
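+
+ <para>After copying some data, the space saved may be checked by
+ querying the <literal>compressratio</literal> property of the
+ dataset, a standard read-only <acronym>ZFS</acronym> property
+ (the dataset name matches the example above):</para>
+
+ <screen>&prompt.root; <userinput>zfs get compressratio example/compressed</userinput></screen>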
+
+ <para>The compression may now be disabled with:</para>
+
+ <screen>&prompt.root; <userinput>zfs set compression=off example/compressed</userinput></screen>
+
+ <para>To unmount the file system, issue the following command
+ and then verify by using the <command>df</command>
+ utility:</para>
+
+ <screen>&prompt.root; <userinput>zfs umount example/compressed</userinput>
+&prompt.root; <userinput>df</userinput>
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235232 1628716 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032864 48737580 2% /usr
+example 17547008 0 17547008 0% /example</screen>
+
+ <para>Re-mount the file system to make it accessible
+ again, and verify with <command>df</command>:</para>
+
+ <screen>&prompt.root; <userinput>zfs mount example/compressed</userinput>
+&prompt.root; <userinput>df</userinput>
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235234 1628714 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032864 48737580 2% /usr
+example 17547008 0 17547008 0% /example
+example/compressed 17547008 0 17547008 0% /example/compressed</screen>
+
+ <para>The pool and file system may also be observed by viewing
+ the output from <command>mount</command>:</para>
+
+ <screen>&prompt.root; <userinput>mount</userinput>
+/dev/ad0s1a on / (ufs, local)
+devfs on /dev (devfs, local)
+/dev/ad0s1d on /usr (ufs, local, soft-updates)
+example on /example (zfs, local)
+example/data on /example/data (zfs, local)
+example/compressed on /example/compressed (zfs, local)</screen>
+
+ <para>As observed, <acronym>ZFS</acronym> file systems, after
+ creation, may be used like ordinary file systems; however, many
+ other features are also available. In the following example, a
+ new file system, <literal>data</literal>, is created. Important
+ files will be stored here, so the file system is set to keep two
+ copies of each data block:</para>
+
+ <screen>&prompt.root; <userinput>zfs create example/data</userinput>
+&prompt.root; <userinput>zfs set copies=2 example/data</userinput></screen>
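+
+ <para>Whether the property took effect may be verified by reading
+ it back with <command>zfs get</command>
+ (<literal>copies</literal> is a standard <acronym>ZFS</acronym>
+ property):</para>
+
+ <screen>&prompt.root; <userinput>zfs get copies example/data</userinput></screen>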
+
+ <para>It is now possible to see the data and space utilization by
+ issuing <command>df</command> again:</para>
+
+ <screen>&prompt.root; <userinput>df</userinput>
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235234 1628714 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032864 48737580 2% /usr
+example 17547008 0 17547008 0% /example
+example/compressed 17547008 0 17547008 0% /example/compressed
+example/data 17547008 0 17547008 0% /example/data</screen>
+
+ <para>Notice that each file system on the pool has the same amount
+ of available space. This is the reason for using
+ <command>df</command> throughout these examples: to show that the
+ file systems use only the amount of space they need and all draw
+ from the same pool. <acronym>ZFS</acronym> does away with
+ concepts such as volumes and partitions, and allows several file
+ systems to occupy the same pool. Destroy the file systems, and
+ then destroy the pool, as they are no longer
+ needed:</para>
+
+ <screen>&prompt.root; <userinput>zfs destroy example/compressed</userinput>
+&prompt.root; <userinput>zfs destroy example/data</userinput>
+&prompt.root; <userinput>zpool destroy example</userinput></screen>
+
+ <para>Disks inevitably go bad and fail. When a disk fails, its
+ data is lost. One method of avoiding data loss due to a failed
+ hard disk is to implement <acronym>RAID</acronym>.
+ <acronym>ZFS</acronym> supports this feature in its pool design,
+ which is covered in the next section.</para>
+ </sect3>
+
+ <sect3>
+ <title><acronym>ZFS</acronym> RAID-Z</title>
+
+ <para>As previously noted, this section assumes that two
+ <acronym>SCSI</acronym> disks exist as devices
+ <devicename>da0</devicename> and
+ <devicename>da1</devicename>. To create a
+ <acronym>RAID</acronym>-Z pool, issue the following
+ command:</para>
+
+ <screen>&prompt.root; <userinput>zpool create storage raidz da0 da1</userinput></screen>
+
+ <para>The <literal>storage</literal> zpool should have been
+ created. This may be verified by using the &man.mount.8; and
+ &man.df.1; commands as before. More disk devices may be
+ allocated by adding them to the end of the list above. Make a
+ new file system in the pool, called <literal>home</literal>,
+ where user files will eventually be placed:</para>
+
+ <screen>&prompt.root; <userinput>zfs create storage/home</userinput></screen>
+
+ <para>It is now possible to enable compression and keep extra
+ copies of the user's home directories and files. This may
+ be accomplished just as before using the following
+ commands:</para>
+
+ <screen>&prompt.root; <userinput>zfs set copies=2 storage/home</userinput>
+&prompt.root; <userinput>zfs set compression=gzip storage/home</userinput></screen>
+
+ <para>To make this the new home directory for users, copy the
+ user data to this directory, and create the appropriate
+ symbolic links:</para>
+
+ <screen>&prompt.root; <userinput>cp -rp /home/* /storage/home</userinput>
+&prompt.root; <userinput>rm -rf /home /usr/home</userinput>
+&prompt.root; <userinput>ln -s /storage/home /home</userinput>
+&prompt.root; <userinput>ln -s /storage/home /usr/home</userinput></screen>
+
+ <para>Users should now have their data stored on the freshly
+ created <filename class="directory">/storage/home</filename>
+ file system. Test by adding a new user and logging in as
+ that user.</para>
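+
+ <para>One quick way to test is to create a throw-away account and
+ log in as that user. The user name <literal>testuser</literal>
+ below is only a placeholder:</para>
+
+ <screen>&prompt.root; <userinput>pw useradd testuser -m -s /bin/sh</userinput>
+&prompt.root; <userinput>su - testuser</userinput></screen>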
+
+ <para>Try creating a snapshot which may be rolled back
+ later:</para>
+
+ <screen>&prompt.root; <userinput>zfs snapshot storage/home@08-30-08</userinput></screen>
+
+ <para>Note that a snapshot captures a real file system, not an
+ individual home directory or file. The <literal>@</literal>
+ character is a delimiter between the file system or volume name
+ and the snapshot name. When a user's home directory gets
+ trashed, restore it with:</para>
+
+ <screen>&prompt.root; <userinput>zfs rollback storage/home@08-30-08</userinput></screen>
+
+ <para>To get a list of all available snapshots, run
+ <command>ls</command> in the file system's
+ <filename class="directory">.zfs/snapshot</filename>
+ directory. For example, to see the previously taken
+ snapshot, run the following command:</para>
+
+ <screen>&prompt.root; <userinput>ls /storage/home/.zfs/snapshot</userinput></screen>
+
+ <para>It is possible to write a script to perform monthly
+ snapshots on user data; however, over time, snapshots
+ may consume a great deal of disk space. The previous
+ snapshot may be removed using the following command:</para>
+
+ <screen>&prompt.root; <userinput>zfs destroy storage/home@08-30-08</userinput></screen>
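+
+ <para>As mentioned above, such snapshots can be scripted. The
+ following is only a minimal sketch of such a script, not an
+ official tool; it assumes the <literal>storage/home</literal>
+ file system from this example and would typically be run from
+ &man.cron.8;:</para>
+
+ <programlisting>#!/bin/sh
+# Minimal example: create a snapshot named after the current date
+# and delete the one taken 31 days earlier, if it exists.
+FS=storage/home
+TODAY=$(date +%m-%d-%y)
+OLD=$(date -v-31d +%m-%d-%y)
+/sbin/zfs snapshot ${FS}@${TODAY}
+/sbin/zfs destroy ${FS}@${OLD} 2>/dev/null || true</programlisting>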
+
+ <para>After all of this testing, there is no reason to keep
+ <filename class="directory">/storage/home</filename> around in
+ its present state. Make it the real
+ <filename class="directory">/home</filename> file
+ system:</para>
+
+ <screen>&prompt.root; <userinput>zfs set mountpoint=/home storage/home</userinput></screen>
+
+ <para>Issuing the <command>df</command> and
+ <command>mount</command> commands will show that the system
+ now treats our file system as the real
+ <filename class="directory">/home</filename>:</para>
+
+ <screen>&prompt.root; <userinput>mount</userinput>
+/dev/ad0s1a on / (ufs, local)
+devfs on /dev (devfs, local)
+/dev/ad0s1d on /usr (ufs, local, soft-updates)
+storage on /storage (zfs, local)
+storage/home on /home (zfs, local)
+&prompt.root; <userinput>df</userinput>
+Filesystem 1K-blocks Used Avail Capacity Mounted on
+/dev/ad0s1a 2026030 235240 1628708 13% /
+devfs 1 1 0 100% /dev
+/dev/ad0s1d 54098308 1032826 48737618 2% /usr
+storage 17547008 0 17547008 0% /storage
+storage/home 17547008 0 17547008 0% /home</screen>
+
+ <para>This completes the <acronym>RAID</acronym>-Z
+ configuration. To receive status updates about the created file
+ systems as part of the nightly &man.periodic.8; runs, issue the
+ following command:</para>
+
+ <screen>&prompt.root; <userinput>echo 'daily_status_zfs_enable="YES"' &gt;&gt; /etc/periodic.conf</userinput></screen>
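+
+ <para>To see the effect without waiting for the next nightly run,
+ the daily &man.periodic.8; scripts may be run by hand. Note that
+ this runs all of the daily scripts, not just the
+ <acronym>ZFS</acronym> status check:</para>
+
+ <screen>&prompt.root; <userinput>periodic daily</userinput></screen>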
+ </sect3>
+
+ <sect3>
+ <title>Recovering <acronym>RAID</acronym>-Z</title>
+
+ <para>Every software <acronym>RAID</acronym> has a method of
+ monitoring its <literal>state</literal>.
+ <acronym>ZFS</acronym> is no exception. The status of
+ <acronym>RAID</acronym>-Z devices may be viewed with the
+ following command:</para>
+
+ <screen>&prompt.root; <userinput>zpool status -x</userinput></screen>
+
+ <para>If all pools are healthy and everything is normal, the
+ following message will be returned:</para>
+
+ <screen>all pools are healthy</screen>
+
+ <para>If there is an issue, for example a disk has gone offline,
+ the pool state will be returned and will look similar to the
+ following:</para>
+
+ <screen> pool: storage
+ state: DEGRADED
+status: One or more devices has been taken offline by the administrator.
+ Sufficient replicas exist for the pool to continue functioning in a
+ degraded state.
+action: Online the device using 'zpool online' or replace the device with
+ 'zpool replace'.
+ scrub: none requested
+config:
+
+ NAME STATE READ WRITE CKSUM
+ storage DEGRADED 0 0 0
+ raidz1 DEGRADED 0 0 0
+ da0 ONLINE 0 0 0
+ da1 OFFLINE 0 0 0
+
+errors: No known data errors</screen>
+
+ <para>This states that the device was taken offline by the
+ administrator. This is true for this particular example.
+ To take the disk offline, the following command was
+ used:</para>
+
+ <screen>&prompt.root; <userinput>zpool offline storage da1</userinput></screen>
+
+ <para>It is now possible to replace
+ <devicename>da1</devicename> after the system has been powered
+ down. When the system is back online, the following command may
+ be issued to replace the disk:</para>
+
+ <screen>&prompt.root; <userinput>zpool replace storage da1</userinput></screen>
+
+ <para>From here, the status may be checked again, this time
+ without the <option>-x</option> flag to get state
+ information:</para>
+
+ <screen>&prompt.root; <userinput>zpool status storage</userinput>
+ pool: storage
+ state: ONLINE
+ scrub: resilver completed with 0 errors on Sat Aug 30 19:44:11 2008
+config:
+
+ NAME STATE READ WRITE CKSUM
+ storage ONLINE 0 0 0
+ raidz1 ONLINE 0 0 0
+ da0 ONLINE 0 0 0
+ da1 ONLINE 0 0 0
+
+errors: No known data errors</screen>
+
+ <para>As shown from this example, everything appears to be
+ normal.</para>
+ </sect3>
+
+ <sect3>
+ <title>Data Verification</title>
+
+ <para>As previously mentioned, <acronym>ZFS</acronym> uses
+ <literal>checksums</literal> to verify the integrity of
+ stored data. They are enabled automatically upon creation
+ of file systems and may be disabled using the following
+ command:</para>
+
+ <screen>&prompt.root; <userinput>zfs set checksum=off storage/home</userinput></screen>
+
+ <para>Doing so is not a wise idea, however, as checksums take
+ very little storage space and are far more useful when enabled.
+ There are also no noticeable performance costs to having them
+ enabled. While enabled, it is possible to have
+ <acronym>ZFS</acronym> check data integrity using checksum
+ verification. This process is known as
+ <quote>scrubbing</quote>. To verify the data integrity of the
+ <literal>storage</literal> pool, issue the following
+ command:</para>
+
+ <screen>&prompt.root; <userinput>zpool scrub storage</userinput></screen>
+
+ <para>This process may take considerable time depending on the
+ amount of data stored. It is also very
+ <acronym>I/O</acronym> intensive, so much so that only one of
+ these operations may be run at any given time. After the scrub
+ has completed, the status is updated and may be viewed by
+ issuing a status request:</para>
+
+ <screen>&prompt.root; <userinput>zpool status storage</userinput>
+ pool: storage
+ state: ONLINE
+ scrub: scrub completed with 0 errors on Sat Aug 30 19:57:37 2008
+config:
+
+ NAME STATE READ WRITE CKSUM
+ storage ONLINE 0 0 0
+ raidz1 ONLINE 0 0 0
+ da0 ONLINE 0 0 0
+ da1 ONLINE 0 0 0
+
+errors: No known data errors</screen>
+
+ <para>The completion time is in plain view in this example.
+ This feature helps to ensure data integrity over a long
+ period of time.</para>
+
+ <para>There are many more options for the Z&nbsp;file system;
+ see the &man.zfs.8; and &man.zpool.8; manual
+ pages.</para>
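+
+ <para>As a starting point, the pools and file systems created in
+ this chapter, along with all of their properties, may be listed
+ with the following commands:</para>
+
+ <screen>&prompt.root; <userinput>zpool list</userinput>
+&prompt.root; <userinput>zfs list -r storage</userinput>
+&prompt.root; <userinput>zfs get all storage/home</userinput></screen>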
+ </sect3>
+ </sect2>
+ </sect1>
+
+
+ <!--
+ XXXTR: stub sections (added later, as needed, as desire,
+ after I get opinions from -doc people):
+
+ Still need to discuss native and foreign file systems.
+
+ <sect1>
+ <title>Device File System</title>
+ </sect1>
+
+ <sect1>
+ <title>DOS and NTFS File Systems</title>
+ <para>This is a good section for those who transfer files, using
+ USB devices, from Windows to FreeBSD and vice-versa. My camera,
+ and many other cameras I have seen default to using FAT16. There
+ is (was?) a kde utility, I think called kamera, that could be used
+ to access camera devices. A section on this would be useful.</para>
+
+ <para>XXXTR: Though! The disks chapter covers a bit of this and
+ devfs under its USB devices. It leaves a lot to be desired though,
+ see:
+http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/usb-disks.html
+ It may be better to flesh out that section a bit more. Add the
+ word "camera" to it so that others can easily notice.</para>
+ </sect1>
+
+ <sect1>
+ <title>Linux EXT File System</title>
+
+ <para>Probably NOT as useful as the other two, but it requires
+ knowledge of the existence of the tools. Which are hidden in
+ the ports collection. Most Linux guys would probably only use
+ Linux, BSD guys would be smarter and use NFS.</para>
+ </sect1>
+
+ <sect1>
+ <title>HFS</title>
+
+ <para>I think this is the file system used on Apple OSX. There are
+ tools in the ports collection, and with Apple being a big
+ FreeBSD supporter and user of our technologies, surely there
+ is enough cross over to cover this?</para>
+ </sect1>
+ -->
+
+</chapter>