author     Gabor Kovesdan <gabor@FreeBSD.org>    2012-10-01 09:53:01 +0000
committer  Gabor Kovesdan <gabor@FreeBSD.org>    2012-10-01 09:53:01 +0000
commit     b4346b9b2dfe86a97907573086dff096850dcb1d (patch)
tree       9b951977cbd22dada9b868ac83b1d56791ea3859 /en_US.ISO8859-1/books/handbook/geom/chapter.xml
parent     bee5d224febbeba11356aa848006a4f5f9e24b30 (diff)
- Rename .sgml files to .xml
- Reflect the rename in referencing files
Approved by: doceng (implicit)
Notes:
svn path=/head/; revision=39631
Diffstat (limited to 'en_US.ISO8859-1/books/handbook/geom/chapter.xml')
-rw-r--r-- | en_US.ISO8859-1/books/handbook/geom/chapter.xml | 981 |
1 file changed, 981 insertions, 0 deletions
diff --git a/en_US.ISO8859-1/books/handbook/geom/chapter.xml b/en_US.ISO8859-1/books/handbook/geom/chapter.xml
new file mode 100644
index 0000000000..ff7ec9a29e
--- /dev/null
+++ b/en_US.ISO8859-1/books/handbook/geom/chapter.xml
@@ -0,0 +1,981 @@
+<?xml version="1.0" encoding="iso-8859-1" standalone="no"?>
+<!--
+     The FreeBSD Documentation Project
+     $FreeBSD$
+
+-->
+
+<chapter id="GEOM">
+  <chapterinfo>
+    <authorgroup>
+      <author>
+        <firstname>Tom</firstname>
+        <surname>Rhodes</surname>
+        <contrib>Written by </contrib>
+      </author>
+    </authorgroup>
+  </chapterinfo>
+
+  <title>GEOM: Modular Disk Transformation Framework</title>
+
+  <sect1 id="GEOM-synopsis">
+    <title>Synopsis</title>
+
+    <indexterm>
+      <primary>GEOM</primary>
+    </indexterm>
+    <indexterm>
+      <primary>GEOM Disk Framework</primary>
+      <see>GEOM</see>
+    </indexterm>
+
+    <para>This chapter covers the use of disks under the GEOM
+      framework in &os;.  This includes the major <acronym
+        role="Redundant Array of Inexpensive Disks">RAID</acronym>
+      control utilities which use the framework for configuration.
+      This chapter does not provide an in-depth discussion of how
+      GEOM handles or controls I/O, the underlying subsystem, or
+      code.  That information is provided through the &man.geom.4;
+      manual page and its various SEE ALSO references.  This chapter
+      is also not a definitive guide to <acronym>RAID</acronym>
+      configurations.
Only GEOM-supported <acronym>RAID</acronym> + classifications will be discussed.</para> + + <para>After reading this chapter, you will know:</para> + + <itemizedlist> + <listitem> + <para>What type of <acronym>RAID</acronym> support is + available through GEOM.</para> + </listitem> + + <listitem> + <para>How to use the base utilities to configure, maintain, + and manipulate the various <acronym>RAID</acronym> + levels.</para> + </listitem> + + <listitem> + <para>How to mirror, stripe, encrypt, and remotely connect + disk devices through GEOM.</para> + </listitem> + + <listitem> + <para>How to troubleshoot disks attached to the GEOM + framework.</para> + </listitem> + </itemizedlist> + + <para>Before reading this chapter, you should:</para> + + <itemizedlist> + <listitem> + <para>Understand how &os; treats disk devices + (<xref linkend="disks"/>).</para> + </listitem> + + <listitem> + <para>Know how to configure and install a new &os; kernel + (<xref linkend="kernelconfig"/>).</para> + </listitem> + </itemizedlist> + </sect1> + + <sect1 id="GEOM-intro"> + <title>GEOM Introduction</title> + + <para>GEOM permits access and control to classes — Master + Boot Records, <acronym>BSD</acronym> labels, etc — through + the use of providers, or the special files in + <filename class="directory">/dev</filename>. 
Supporting various + software <acronym>RAID</acronym> configurations, GEOM will + transparently provide access to the operating system and + operating system utilities.</para> + </sect1> + + <sect1 id="GEOM-striping"> + <sect1info> + <authorgroup> + <author> + <firstname>Tom</firstname> + <surname>Rhodes</surname> + <contrib>Written by </contrib> + </author> + <author> + <firstname>Murray</firstname> + <surname>Stokely</surname> + </author> + </authorgroup> + </sect1info> + + <title>RAID0 - Striping</title> + + <indexterm> + <primary>GEOM</primary> + </indexterm> + <indexterm> + <primary>Striping</primary> + </indexterm> + + <para>Striping is a method used to combine several disk drives + into a single volume. In many cases, this is done through the + use of hardware controllers. The GEOM disk subsystem provides + software support for <acronym>RAID</acronym>0, also known as + disk striping.</para> + + <para>In a <acronym>RAID</acronym>0 system, data are split up in + blocks that get written across all the drives in the array. + Instead of having to wait on the system to write 256k to one + disk, a <acronym>RAID</acronym>0 system can simultaneously write + 64k to each of four different disks, offering superior I/O + performance. 
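</para>

<para>The interleaving described above can be sketched in a few
lines of Python (a purely conceptual model of RAID0 placement, not
how GEOM itself is implemented):</para>

```python
# Conceptual RAID0 placement: logical block i lands on disk (i mod n)
# at per-disk offset (i div n).  Illustrative only; the real stripe
# code works on byte ranges and stripe sizes, not this toy block map.

def stripe_target(block, ndisks):
    """Map a logical block number to (disk index, offset on that disk)."""
    return block % ndisks, block // ndisks

# A 256k write split into four 64k chunks touches all four disks at once:
chunks = [stripe_target(b, 4) for b in range(4)]
```

<para>With four disks, the four chunks land on disks 0 through 3 and
can be written in parallel, which is where the performance gain comes
from.</para>

<para>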
This performance can be enhanced further by using + multiple disk controllers.</para> + + <para>Each disk in a <acronym>RAID</acronym>0 stripe must be of + the same size, since I/O requests are interleaved to read or + write to multiple disks in parallel.</para> + + <mediaobject> + <imageobject> + <imagedata fileref="geom/striping" align="center"/> + </imageobject> + + <textobject> + <phrase>Disk Striping Illustration</phrase> + </textobject> + </mediaobject> + + <procedure> + <title>Creating a Stripe of Unformatted ATA Disks</title> + + <step> + <para>Load the <filename>geom_stripe.ko</filename> + module:</para> + + <screen>&prompt.root; <userinput>kldload geom_stripe</userinput></screen> + </step> + + <step> + <para>Ensure that a suitable mount point exists. If this + volume will become a root partition, then temporarily use + another mount point such as <filename + class="directory">/mnt</filename>:</para> + + <screen>&prompt.root; <userinput>mkdir /mnt</userinput></screen> + </step> + + <step> + <para>Determine the device names for the disks which will + be striped, and create the new stripe device. For example, + to stripe two unused and unpartitioned + <acronym>ATA</acronym> disks, for example + <filename>/dev/ad2</filename> and + <filename>/dev/ad3</filename>:</para> + + <screen>&prompt.root; <userinput>gstripe label -v st0 /dev/ad2 /dev/ad3</userinput> +Metadata value stored on /dev/ad2. +Metadata value stored on /dev/ad3. +Done.</screen> + </step> + + <step> + <para>Write a standard label, also known as a partition + table, on the new volume and install the default + bootstrap code:</para> + + <screen>&prompt.root; <userinput>bsdlabel -wB /dev/stripe/st0</userinput></screen> + </step> + + <step> + <para>This process should have created two other devices + in the <filename class="directory">/dev/stripe</filename> + directory in addition to the <devicename>st0</devicename> + device. 
Those include <devicename>st0a</devicename> and + <devicename>st0c</devicename>. At this point a file system + may be created on the <devicename>st0a</devicename> device + with the <command>newfs</command> utility:</para> + + <screen>&prompt.root; <userinput>newfs -U /dev/stripe/st0a</userinput></screen> + + <para>Many numbers will glide across the screen, and after a + few seconds, the process will be complete. The volume has + been created and is ready to be mounted.</para> + </step> + </procedure> + + <para>To manually mount the created disk stripe:</para> + + <screen>&prompt.root; <userinput>mount /dev/stripe/st0a /mnt</userinput></screen> + + <para>To mount this striped file system automatically during the + boot process, place the volume information in + <filename>/etc/fstab</filename> file. For this purpose, a + permanent mount point, named + <filename class="directory">stripe</filename>, is + created:</para> + + <screen>&prompt.root; <userinput>mkdir /stripe</userinput> +&prompt.root; <userinput>echo "/dev/stripe/st0a /stripe ufs rw 2 2" \</userinput> + <userinput>>> /etc/fstab</userinput></screen> + + <para>The <filename>geom_stripe.ko</filename> module must also be + automatically loaded during system initialization, by adding a + line to <filename>/boot/loader.conf</filename>:</para> + + <screen>&prompt.root; <userinput>echo 'geom_stripe_load="YES"' >> /boot/loader.conf</userinput></screen> + </sect1> + + <sect1 id="GEOM-mirror"> + <title>RAID1 - Mirroring</title> + + <indexterm> + <primary>GEOM</primary> + </indexterm> + <indexterm> + <primary>Disk Mirroring</primary> + </indexterm> + + <para>Mirroring is a technology used by many corporations and home + users to back up data without interruption. When a mirror + exists, it simply means that diskB replicates diskA. Or, + perhaps diskC+D replicates diskA+B. Regardless of the disk + configuration, the important aspect is that information on one + disk or partition is being replicated. 
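</para>

<para>The replication idea can be sketched as follows (a toy Python
model of RAID1 behavior under a hypothetical member failure, not
gmirror code):</para>

```python
# Toy RAID1 model: every write goes to all members, so a read can be
# satisfied by any surviving member.  Purely illustrative.

class Mirror:
    def __init__(self, ndisks, nblocks):
        self.disks = [[None] * nblocks for _ in range(ndisks)]

    def write(self, block, data):
        for disk in self.disks:        # replicate to every member
            disk[block] = data

    def read(self, block, failed=()):
        for i, disk in enumerate(self.disks):
            if i not in failed:        # any healthy member holds a full copy
                return disk[block]
        raise IOError("all mirror members failed")

m = Mirror(ndisks=2, nblocks=8)
m.write(3, b"payload")
```

<para>Even with one member marked failed, the data is still readable
from the other, which is exactly the property mirroring
provides.</para>

<para>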
Later, that information
+      could be more easily restored, backed up without causing service
+      or access interruption, and even be physically stored in a data
+      safe.</para>
+
+    <para>To begin, ensure the system has two disk drives of equal
+      size.  These exercises assume they are direct access
+      (&man.da.4;) <acronym>SCSI</acronym> disks.</para>
+
+    <sect2>
+      <title>Mirroring Primary Disks</title>
+
+      <para>Assuming &os; has been installed on the first disk,
+        <devicename>da0</devicename>, &man.gmirror.8; should be told
+        to store its primary data there.</para>
+
+      <para>Before building the mirror, enable additional debugging
+        information and open access to the device by setting the
+        <varname>kern.geom.debugflags</varname> &man.sysctl.8; option
+        to the following value:</para>
+
+      <screen>&prompt.root; <userinput>sysctl kern.geom.debugflags=17</userinput></screen>
+
+      <para>Now create the mirror.  Begin the process by storing
+        meta-data information on the primary disk device, effectively
+        creating the
+        <filename class="devicefile">/dev/mirror/gm0</filename> device
+        using the following command:</para>
+
+      <warning>
+        <para>Creating a mirror out of the boot drive may result in
+          data loss if any data has been stored on the last sector of
+          the disk.  This risk is reduced if the mirror is created
+          promptly after a fresh install of &os;.  The following
+          procedure is also incompatible with the default installation
+          settings of &os; 9.<replaceable>X</replaceable>, which
+          use the new <acronym>GPT</acronym> partition scheme.  GEOM
+          will overwrite <acronym>GPT</acronym> metadata, causing data
+          loss and possibly an unbootable system.</para>
+      </warning>
+
+      <screen>&prompt.root; <userinput>gmirror label -vb round-robin gm0 /dev/da0</userinput></screen>
+
+      <para>The system should respond with:</para>
+
+      <screen>Metadata value stored on /dev/da0.
+Done.</screen>
+
+      <para>Initialize GEOM; this will load the
+        <filename>/boot/kernel/geom_mirror.ko</filename> kernel
+        module:</para>
+
+      <screen>&prompt.root; <userinput>gmirror load</userinput></screen>
+
+      <note>
+        <para>When this command completes successfully, it creates the
+          <devicename>gm0</devicename> device node under the
+          <filename class="directory">/dev/mirror</filename>
+          directory.</para>
+      </note>
+
+      <para>Enable loading of the <filename>geom_mirror.ko</filename>
+        kernel module during system initialization:</para>
+
+      <screen>&prompt.root; <userinput>echo 'geom_mirror_load="YES"' >> /boot/loader.conf</userinput></screen>
+
+      <para>Edit the <filename>/etc/fstab</filename> file, replacing
+        references to the old <devicename>da0</devicename> with the
+        new device nodes of the <devicename>gm0</devicename> mirror
+        device.</para>
+
+      <note>
+        <para>If &man.vi.1; is your preferred editor, the following is
+          an easy way to accomplish this task:</para>
+
+        <screen>&prompt.root; <userinput>vi /etc/fstab</userinput></screen>
+
+        <para>In &man.vi.1;, back up the current contents of
+          <filename>fstab</filename> by typing
+          <userinput>:w /etc/fstab.bak</userinput>.  Then replace all
+          old <devicename>da0</devicename> references with
+          <devicename>gm0</devicename> by typing
+          <userinput>:%s/da/mirror\/gm/g</userinput>.</para>
+      </note>
+
+      <para>The resulting <filename>fstab</filename> file should look
+        similar to the following.
It does not matter whether the disk
+        drives are <acronym>SCSI</acronym> or <acronym>ATA</acronym>;
+        the <acronym>RAID</acronym> device will be
+        <devicename>gm</devicename> regardless.</para>
+
+      <programlisting># Device                Mountpoint      FStype  Options         Dump    Pass#
+/dev/mirror/gm0s1b      none            swap    sw              0       0
+/dev/mirror/gm0s1a      /               ufs     rw              1       1
+/dev/mirror/gm0s1d      /usr            ufs     rw              0       0
+/dev/mirror/gm0s1f      /home           ufs     rw              2       2
+#/dev/mirror/gm0s2d     /store          ufs     rw              2       2
+/dev/mirror/gm0s1e      /var            ufs     rw              2       2
+/dev/acd0               /cdrom          cd9660  ro,noauto       0       0</programlisting>
+
+      <para>Reboot the system:</para>
+
+      <screen>&prompt.root; <userinput>shutdown -r now</userinput></screen>
+
+      <para>During system initialization, the
+        <devicename>gm0</devicename> device should be used in place
+        of <devicename>da0</devicename>.  Once fully initialized,
+        this may be checked by visually inspecting the output from
+        the <command>mount</command> command:</para>
+
+      <screen>&prompt.root; <userinput>mount</userinput>
+Filesystem         1K-blocks    Used    Avail Capacity  Mounted on
+/dev/mirror/gm0s1a   1012974  224604   707334    24%    /
+devfs                      1       1        0   100%    /dev
+/dev/mirror/gm0s1f  45970182   28596 42263972     0%    /home
+/dev/mirror/gm0s1d   6090094 1348356  4254532    24%    /usr
+/dev/mirror/gm0s1e   3045006 2241420   559986    80%    /var
+devfs                      1       1        0   100%    /var/named/dev</screen>
+
+      <para>The output looks good, as expected.
Finally, to begin + synchronization, insert the <devicename>da1</devicename> disk + into the mirror using the following command:</para> + + <screen>&prompt.root; <userinput>gmirror insert gm0 /dev/da1</userinput></screen> + + <para>As the mirror is built the status may be checked using + the following command:</para> + + <screen>&prompt.root; <userinput>gmirror status</userinput></screen> + + <para>Once the mirror has been built and all current data + has been synchronized, the output from the above command + should look like:</para> + + <screen> Name Status Components +mirror/gm0 COMPLETE da0 + da1</screen> + + <para>If there are any issues, or the mirror is still + completing the build process, the example will show + <literal>DEGRADED</literal> in place of + <literal>COMPLETE</literal>.</para> + </sect2> + + <sect2> + <title>Troubleshooting</title> + + <sect3> + <title>System Refuses to Boot</title> + + <para>If the system boots up to a prompt similar to:</para> + + <programlisting>ffs_mountroot: can't find rootvp +Root mount failed: 6 +mountroot></programlisting> + + <para>Reboot the machine using the power or reset button. At + the boot menu, select option six (6). This will drop the + system to a &man.loader.8; prompt. Load the kernel module + manually:</para> + + <screen>OK? <userinput>load geom_mirror</userinput> +OK? <userinput>boot</userinput></screen> + + <para>If this works then for whatever reason the module was + not being loaded properly. Check whether the relevant entry + in <filename>/boot/loader.conf</filename> is correct. If + the problem persists, place:</para> + + <programlisting>options GEOM_MIRROR</programlisting> + + <para>in the kernel configuration file, rebuild and reinstall. 
+ That should remedy this issue.</para> + </sect3> + </sect2> + + <sect2> + <title>Recovering from Disk Failure</title> + + <para>The wonderful part about disk mirroring is that when a + disk fails, it may be replaced, presumably, without losing + any data.</para> + + <para>Considering the previous <acronym>RAID</acronym>1 + configuration, assume that <devicename>da1</devicename> + has failed and now needs to be replaced. To replace it, + determine which disk has failed and power down the system. + At this point, the disk may be swapped with a new one and + the system brought back up. After the system has restarted, + the following commands may be used to replace the disk:</para> + + <screen>&prompt.root; <userinput>gmirror forget gm0</userinput></screen> + + <screen>&prompt.root; <userinput>gmirror insert gm0 /dev/da1</userinput></screen> + + <para>Use the <command>gmirror</command> <option>status</option> + command to monitor the progress of the rebuild. It is that + simple.</para> + </sect2> + </sect1> + + <sect1 id="GEOM-raid3"> + <sect1info> + <authorgroup> + <author> + <firstname>Mark</firstname> + <surname>Gladman</surname> + <contrib>Written by </contrib> + </author> + <author> + <firstname>Daniel</firstname> + <surname>Gerzo</surname> + </author> + </authorgroup> + <authorgroup> + <author> + <firstname>Tom</firstname> + <surname>Rhodes</surname> + <contrib>Based on documentation by </contrib> + </author> + <author> + <firstname>Murray</firstname> + <surname>Stokely</surname> + </author> + </authorgroup> + </sect1info> + + <title><acronym>RAID</acronym>3 - Byte-level Striping with Dedicated + Parity</title> + + <indexterm> + <primary>GEOM</primary> + </indexterm> + <indexterm> + <primary>RAID3</primary> + </indexterm> + + <para><acronym>RAID</acronym>3 is a method used to combine several + disk drives into a single volume with a dedicated parity + disk. 
In a <acronym>RAID</acronym>3 system, data is split up + into a number of bytes that are written across all the drives in + the array except for one disk which acts as a dedicated parity + disk. This means that reading 1024KB from a + <acronym>RAID</acronym>3 implementation will access all disks in + the array. Performance can be enhanced by using multiple + disk controllers. The <acronym>RAID</acronym>3 array provides a + fault tolerance of 1 drive, while providing a capacity of 1 - 1/n + times the total capacity of all drives in the array, where n is the + number of hard drives in the array. Such a configuration is + mostly suitable for storing data of larger sizes, e.g. + multimedia files.</para> + + <para>At least 3 physical hard drives are required to build a + <acronym>RAID</acronym>3 array. Each disk must be of the same + size, since I/O requests are interleaved to read or write to + multiple disks in parallel. Also, due to the nature of + <acronym>RAID</acronym>3, the number of drives must be + equal to 3, 5, 9, 17, etc. (2^n + 1).</para> + + <sect2> + <title>Creating a Dedicated <acronym>RAID</acronym>3 Array</title> + + <para>In &os;, support for <acronym>RAID</acronym>3 is + implemented by the &man.graid3.8; <acronym>GEOM</acronym> + class. 
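</para>

<para>The dedicated-parity scheme described above can be illustrated
with a short Python sketch (XOR parity over toy two-byte stripes; not
graid3's actual implementation):</para>

```python
# RAID3-style dedicated parity: the parity byte is the XOR of the data
# bytes at the same position, so any one lost member can be rebuilt
# from the survivors.  Disk names below are illustrative.

def parity(members):
    """XOR equal-length byte strings together."""
    out = bytearray(len(members[0]))
    for m in members:
        for i, b in enumerate(m):
            out[i] ^= b
    return bytes(out)

data = [b"\x01\x02", b"\x04\x08"]   # data disks (ada1, ada2 in the text)
p = parity(data)                    # dedicated parity disk (ada3)

# Rebuild the first data disk after a failure using the survivors:
rebuilt = parity([data[1], p])
```

<para>This is also why the usable capacity is 1 - 1/n of the total:
one member's worth of space always holds parity.</para>

<para>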
Creating a dedicated + <acronym>RAID</acronym>3 array on &os; requires the following + steps.</para> + + <note> + <para>While it is theoretically possible to boot from a + <acronym>RAID</acronym>3 array on &os;, that configuration + is uncommon and is not advised.</para> + </note> + + <procedure> + <step> + <para>First, load the <filename>geom_raid3.ko</filename> + kernel module by issuing the following command:</para> + + <screen>&prompt.root; <userinput>graid3 load</userinput></screen> + + <para>Alternatively, it is possible to manually load the + <filename>geom_raid3.ko</filename> module:</para> + + <screen>&prompt.root; <userinput>kldload geom_raid3.ko</userinput></screen> + </step> + + <step> + <para>Create or ensure that a suitable mount point + exists:</para> + + <screen>&prompt.root; <userinput>mkdir <replaceable>/multimedia/</replaceable></userinput></screen> + </step> + + <step> + <para>Determine the device names for the disks which will be + added to the array, and create the new + <acronym>RAID</acronym>3 device. The final device listed + will act as the dedicated parity disk. This + example uses three unpartitioned + <acronym>ATA</acronym> drives: + <devicename><replaceable>ada1</replaceable></devicename> + and <devicename><replaceable>ada2</replaceable></devicename> + for data, and + <devicename><replaceable>ada3</replaceable></devicename> + for parity.</para> + + <screen>&prompt.root; <userinput>graid3 label -v gr0 /dev/ada1 /dev/ada2 /dev/ada3</userinput> +Metadata value stored on /dev/ada1. +Metadata value stored on /dev/ada2. +Metadata value stored on /dev/ada3. 
+Done.</screen> + </step> + + <step> + <para>Partition the newly created + <devicename>gr0</devicename> device and put a UFS file + system on it:</para> + + <screen>&prompt.root; <userinput>gpart create -s GPT /dev/raid3/gr0</userinput> +&prompt.root; <userinput>gpart add -t freebsd-ufs /dev/raid3/gr0</userinput> +&prompt.root; <userinput>newfs -j /dev/raid3/gr0p1</userinput></screen> + + <para>Many numbers will glide across the screen, and after a + bit of time, the process will be complete. The volume has + been created and is ready to be mounted.</para> + </step> + + <step> + <para>The last step is to mount the file system:</para> + + <screen>&prompt.root; <userinput>mount /dev/raid3/gr0p1 /multimedia/</userinput></screen> + + <para>The <acronym>RAID</acronym>3 array is now ready to + use.</para> + </step> + </procedure> + + <para>Additional configuration is needed to retain the above + setup across system reboots.</para> + + <procedure> + <step> + <para>The <filename>geom_raid3.ko</filename> module must be + loaded before the array can be mounted. To automatically + load the kernel module during the system initialization, add + the following line to the + <filename>/boot/loader.conf</filename> file:</para> + + <programlisting>geom_raid3_load="YES"</programlisting> + </step> + + <step> + <para>The following volume information must be added to the + <filename>/etc/fstab</filename> file in order to + automatically mount the array's file system during + the system boot process:</para> + + <programlisting>/dev/raid3/gr0p1 /multimedia ufs rw 2 2</programlisting> + </step> + </procedure> + </sect2> + </sect1> + + <sect1 id="geom-ggate"> + <title>GEOM Gate Network Devices</title> + + <para>GEOM supports the remote use of devices, such as disks, + CD-ROMs, files, etc. through the use of the gate utilities. + This is similar to <acronym>NFS</acronym>.</para> + + <para>To begin, an exports file must be created. 
This file
+      specifies who is permitted to access the exported resources and
+      what level of access they are offered.  For example, to export
+      the fourth slice on the first <acronym>SCSI</acronym> disk, the
+      following <filename>/etc/gg.exports</filename> is more than
+      adequate:</para>
+
+    <programlisting>192.168.1.0/24 RW /dev/da0s4d</programlisting>
+
+    <para>This will allow all hosts inside the private network to
+      access the file system on the
+      <devicename>da0s4d</devicename> partition.</para>
+
+    <para>To export this device, ensure it is not currently mounted,
+      and start the &man.ggated.8; server daemon:</para>
+
+    <screen>&prompt.root; <userinput>ggated</userinput></screen>
+
+    <para>Now, to <command>mount</command> the device on the client
+      machine, issue the following commands:</para>
+
+    <screen>&prompt.root; <userinput>ggatec create -o rw 192.168.1.1 /dev/da0s4d</userinput>
+ggate0
+&prompt.root; <userinput>mount /dev/ggate0 /mnt</userinput></screen>
+
+    <para>From here on, the device may be accessed through the
+      <filename class="directory">/mnt</filename> mount point.</para>
+
+    <note>
+      <para>It should be pointed out that this will fail if the device
+        is currently mounted on either the server machine or any other
+        machine on the network.</para>
+    </note>
+
+    <para>When the device is no longer needed, it may be safely
+      unmounted with the &man.umount.8; command, similar to any other
+      disk device.</para>
+  </sect1>
+
+  <sect1 id="geom-glabel">
+    <title>Labeling Disk Devices</title>
+
+    <indexterm>
+      <primary>GEOM</primary>
+    </indexterm>
+    <indexterm>
+      <primary>Disk Labels</primary>
+    </indexterm>
+
+    <para>During system initialization, the &os; kernel will create
+      device nodes as devices are found.  This method of probing for
+      devices raises some issues; for instance, what if a new disk
+      device is added via <acronym>USB</acronym>?
It is very likely
+      that a flash device may be handed the device name of
+      <devicename>da0</devicename> and the original
+      <devicename>da0</devicename> shifted to
+      <devicename>da1</devicename>.  This will cause issues mounting
+      file systems if they are listed in
+      <filename>/etc/fstab</filename>; effectively, this may also
+      prevent the system from booting.</para>
+
+    <para>One solution to this issue is to chain the
+      <acronym>SCSI</acronym> devices in order so a new device added
+      to the <acronym>SCSI</acronym> card will be issued unused device
+      numbers.  But what about <acronym>USB</acronym> devices which
+      may replace the primary <acronym>SCSI</acronym> disk?  This
+      happens because <acronym>USB</acronym> devices are usually
+      probed before the <acronym>SCSI</acronym> card.  One solution is
+      to only insert these devices after the system has been booted.
+      Another method could be to use only a single
+      <acronym>ATA</acronym> drive and never list the
+      <acronym>SCSI</acronym> devices in
+      <filename>/etc/fstab</filename>.</para>
+
+    <para>A better solution is available.  By using the
+      <command>glabel</command> utility, an administrator or user may
+      label their disk devices and use these labels in
+      <filename>/etc/fstab</filename>.  Because
+      <command>glabel</command> stores the label in the last sector of
+      a given provider, the label will remain persistent across
+      reboots.  By using this label as a device, the file system may
+      always be mounted regardless of what device node it is accessed
+      through.</para>
+
+    <note>
+      <para>This does not mean that every label is permanent.  The
+        <command>glabel</command> utility may be used to create both
+        transient and permanent labels.  Only the permanent label will
+        remain consistent across reboots.
See the &man.glabel.8; + manual page for more information on the differences between + labels.</para> + </note> + + <sect2> + <title>Label Types and Examples</title> + + <para>There are two types of labels, a generic label and a + file system label. Labels can be permanent or temporary. + Permanent labels can be created with the &man.tunefs.8; + or &man.newfs.8; commands. They will then be created + in a sub-directory of + <filename class="directory">/dev</filename>, which will be + named according to their file system type. For example, + <acronym>UFS</acronym>2 file system labels will be created in + the <filename class="directory">/dev/ufs</filename> directory. + Permanent labels can also be created with the <command>glabel + label</command> command. These are not file system specific, + and will be created in the + <filename class="directory">/dev/label</filename> + directory.</para> + + <para>A temporary label will go away with the next reboot. + These labels will be created in the + <filename class="directory">/dev/label</filename> directory + and are perfect for experimentation. A temporary label can be + created using the <command>glabel create</command> command. + For more information, please read the manual page of + &man.glabel.8;.</para> + +<!-- XXXTR: How do you create a file system label without running newfs + or when there is no newfs (e.g.: cd9660)? 
--> + + <para>To create a permanent label for a + <acronym>UFS</acronym>2 file system without destroying any + data, issue the following command:</para> + + <screen>&prompt.root; <userinput>tunefs -L <replaceable>home</replaceable> <replaceable>/dev/da3</replaceable></userinput></screen> + + <warning> + <para>If the file system is full, this may cause data + corruption; however, if the file system is full then the + main goal should be removing stale files and not adding + labels.</para> + </warning> + + <para>A label should now exist in + <filename class="directory">/dev/ufs</filename> which may be + added to <filename>/etc/fstab</filename>:</para> + + <programlisting>/dev/ufs/home /home ufs rw 2 2</programlisting> + + <note> + <para>The file system must not be mounted while attempting + to run <command>tunefs</command>.</para> + </note> + + <para>Now the file system may be mounted like normal:</para> + + <screen>&prompt.root; <userinput>mount /home</userinput></screen> + + <para>From this point on, so long as the + <filename>geom_label.ko</filename> kernel module is loaded at + boot with <filename>/boot/loader.conf</filename> or the + <literal>GEOM_LABEL</literal> kernel option is present, + the device node may change without any ill effect on the + system.</para> + + <para>File systems may also be created with a default label + by using the <option>-L</option> flag with + <command>newfs</command>. 
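</para>

<para>The reason labels survive device renumbering can be sketched in
Python (a hypothetical model of looking a label up by scanning
providers rather than trusting a fixed device node; the names are
examples):</para>

```python
# Mounting by label: find whichever device node currently carries the
# label, instead of hard-coding da0.  Illustrative only.

def find_by_label(providers, label):
    """Return the device node whose stored label matches."""
    for node, stored in providers.items():
        if stored == label:
            return node
    raise KeyError(label)

# Before: the disk probes as da0.  After a USB stick claims da0, the
# same labeled disk shifts to da1 but is still found by its label.
before = {"da0": "home"}
after = {"da0": "usbstick", "da1": "home"}
```

<para>An <filename>/etc/fstab</filename> entry for
<filename>/dev/ufs/home</filename> keeps working in both cases, since
the lookup is by label rather than by device node.</para>

<para>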
See the &man.newfs.8; manual page + for more information.</para> + + <para>The following command can be used to destroy the + label:</para> + + <screen>&prompt.root; <userinput>glabel destroy home</userinput></screen> + + <para>The following example shows how to label the partitions of + a boot disk.</para> + + <example> + <title>Labeling Partitions on the Boot Disk</title> + + <para>By permanently labeling the partitions on the boot disk, + the system should be able to continue to boot normally, even + if the disk is moved to another controller or transferred to + a different system. For this example, it is assumed that a + single <acronym>ATA</acronym> disk is used, which is + currently recognized by the system as + <devicename>ad0</devicename>. It is also assumed that the + standard &os; partition scheme is used, with + <filename class="directory">/</filename>, + <filename class="directory">/var</filename>, + <filename class="directory">/usr</filename> and + <filename class="directory">/tmp</filename> file systems, as + well as a swap partition.</para> + + <para>Reboot the system, and at the &man.loader.8; prompt, + press <keycap>4</keycap> to boot into single user mode. + Then enter the following commands:</para> + + <screen>&prompt.root; <userinput>glabel label rootfs /dev/ad0s1a</userinput> +GEOM_LABEL: Label for provider /dev/ad0s1a is label/rootfs +&prompt.root; <userinput>glabel label var /dev/ad0s1d</userinput> +GEOM_LABEL: Label for provider /dev/ad0s1d is label/var +&prompt.root; <userinput>glabel label usr /dev/ad0s1f</userinput> +GEOM_LABEL: Label for provider /dev/ad0s1f is label/usr +&prompt.root; <userinput>glabel label tmp /dev/ad0s1e</userinput> +GEOM_LABEL: Label for provider /dev/ad0s1e is label/tmp +&prompt.root; <userinput>glabel label swap /dev/ad0s1b</userinput> +GEOM_LABEL: Label for provider /dev/ad0s1b is label/swap +&prompt.root; <userinput>exit</userinput></screen> + + <para>The system will continue with multi-user boot. 
After + the boot completes, edit <filename>/etc/fstab</filename> and + replace the conventional device names, with their respective + labels. The final <filename>/etc/fstab</filename> file will + look like the following:</para> + + <programlisting># Device Mountpoint FStype Options Dump Pass# +/dev/label/swap none swap sw 0 0 +/dev/label/rootfs / ufs rw 1 1 +/dev/label/tmp /tmp ufs rw 2 2 +/dev/label/usr /usr ufs rw 2 2 +/dev/label/var /var ufs rw 2 2</programlisting> + + <para>The system can now be rebooted. If everything went + well, it will come up normally and <command>mount</command> + will show:</para> + + <screen>&prompt.root; <userinput>mount</userinput> +/dev/label/rootfs on / (ufs, local) +devfs on /dev (devfs, local) +/dev/label/tmp on /tmp (ufs, local, soft-updates) +/dev/label/usr on /usr (ufs, local, soft-updates) +/dev/label/var on /var (ufs, local, soft-updates)</screen> + </example> + + <para>Starting with &os; 7.2, the &man.glabel.8; class + supports a new label type for <acronym>UFS</acronym> file + systems, based on the unique file system id, + <literal>ufsid</literal>. These labels may be found in the + <filename class="directory">/dev/ufsid</filename> directory + and are created automatically during system startup. It is + possible to use <literal>ufsid</literal> labels to mount + partitions using the <filename>/etc/fstab</filename> facility. + Use the <command>glabel status</command> command to receive a + list of file systems and their corresponding + <literal>ufsid</literal> labels:</para> + + <screen>&prompt.user; <userinput>glabel status</userinput> + Name Status Components +ufsid/486b6fc38d330916 N/A ad4s1d +ufsid/486b6fc16926168e N/A ad4s1f</screen> + + <para>In the above example <devicename>ad4s1d</devicename> + represents the <filename class="directory">/var</filename> + file system, while <devicename>ad4s1f</devicename> represents + the <filename class="directory">/usr</filename> file system. 
Using the <literal>ufsid</literal> values shown, these
	partitions may now be mounted with the following entries in
	<filename>/etc/fstab</filename>:</para>

      <programlisting>/dev/ufsid/486b6fc38d330916     /var    ufs     rw      2       2
/dev/ufsid/486b6fc16926168e     /usr    ufs     rw      2       2</programlisting>

      <para>Any partitions with <literal>ufsid</literal> labels can be
	mounted in this way, eliminating the need to create permanent
	labels for them manually, while still enjoying the benefits of
	device-name independent mounting.</para>
    </sect2>
  </sect1>

  <sect1 id="geom-gjournal">
    <title>UFS Journaling Through GEOM</title>

    <indexterm>
      <primary>GEOM</primary>
    </indexterm>
    <indexterm>
      <primary>Journaling</primary>
    </indexterm>

    <para>With the release of &os; 7.0, the long-awaited journaling
      feature has been implemented.  The implementation itself is
      provided through the <acronym>GEOM</acronym> subsystem and is
      easily configured via the &man.gjournal.8; utility.</para>

    <para>What is journaling?  Journaling stores a log of file system
      transactions, i.e., the changes that make up a complete disk
      write operation, before meta-data and file writes are committed
      to the disk proper.  This transaction log can later be replayed
      to redo file system transactions, preventing file system
      inconsistencies.</para>

    <para>This method is yet another mechanism to protect against data
      loss and inconsistencies of the file system.
Unlike Soft
      Updates, which tracks and enforces meta-data updates, and
      Snapshots, which is an image of the file system, an actual log
      is stored in disk space specifically reserved for this task, and
      in some cases may be stored on another disk entirely.</para>

    <para>Unlike other file system journaling implementations, the
      <command>gjournal</command> method is block based and not
      implemented as part of the file system - only as a
      <acronym>GEOM</acronym> extension.</para>

    <para>To enable support for <command>gjournal</command>, the
      &os; kernel must have the following option, which is the
      default on &os; 7.0 and later systems:</para>

    <programlisting>options UFS_GJOURNAL</programlisting>

    <para>If journaled volumes need to be mounted during startup, the
      <filename>geom_journal.ko</filename> kernel module will also
      have to be loaded by adding the following line to
      <filename>/boot/loader.conf</filename>:</para>

    <programlisting>geom_journal_load="YES"</programlisting>

    <para>Alternatively, this function can also be built into a custom
      kernel by adding the following line to the kernel configuration
      file:</para>

    <programlisting>options GEOM_JOURNAL</programlisting>

    <para>Creating a journal on a free file system may now be done
      using the following steps, assuming that
      <devicename>da4</devicename> is a new <acronym>SCSI</acronym>
      disk:</para>

    <screen>&prompt.root; <userinput>gjournal load</userinput>
&prompt.root; <userinput>gjournal label /dev/da4</userinput></screen>

    <para>At this point, there should be a
      <devicename>/dev/da4</devicename> device node and a
      <devicename>/dev/da4.journal</devicename> device node.
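The journal's presence and configuration can also be
      verified with the standard <acronym>GEOM</acronym>
      <command>list</command> verb; the exact output depends on the
      &os; version and the disk in use:</para>

    <screen>&prompt.root; <userinput>gjournal list</userinput></screen>

    <para>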
A file system may now be created on this device:</para>

    <screen>&prompt.root; <userinput>newfs -O 2 -J /dev/da4.journal</userinput></screen>

    <para>The previously issued command will create a
      <acronym>UFS</acronym>2 file system on the journaled
      device.</para>

    <para>Now <command>mount</command> the device at the desired
      mount point with:</para>

    <screen>&prompt.root; <userinput>mount /dev/da4.journal <replaceable>/mnt</replaceable></userinput></screen>

    <note>
      <para>In the case of several slices, a journal will be created
	for each individual slice.  For instance, if
	<devicename>ad4s1</devicename> and
	<devicename>ad4s2</devicename> are both slices, then
	<command>gjournal</command> will create
	<devicename>ad4s1.journal</devicename> and
	<devicename>ad4s2.journal</devicename>.</para>
    </note>

    <para>For better performance, keeping the journal on another disk
      may be desired.  In such cases, the journal provider or storage
      device should be listed after the device on which to enable
      journaling.  Journaling may also be enabled on existing file
      systems by using <command>tunefs</command>; however, always make
      a backup before attempting to alter a file system.  In most
      cases, <command>gjournal</command> will fail if it is unable to
      create the journal itself, but this does not protect against
      data loss incurred as a result of misusing
      <command>tunefs</command>.</para>

    <para>It is also possible to journal the boot disk of a &os;
      system.  Please refer to the article <ulink
	url="&url.articles.gjournal-desktop;">Implementing UFS
	Journaling on a Desktop PC</ulink> for detailed instructions
      on this task.</para>
  </sect1>
</chapter>