Diffstat (limited to 'sys/contrib/openzfs/man/man8/zpool.8')
-rw-r--r--  sys/contrib/openzfs/man/man8/zpool.8  568
1 file changed, 568 insertions, 0 deletions
diff --git a/sys/contrib/openzfs/man/man8/zpool.8 b/sys/contrib/openzfs/man/man8/zpool.8
new file mode 100644
index 000000000000..0fe6866f33da
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool.8
@@ -0,0 +1,568 @@
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or http://www.opensolaris.org/os/licensing.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd August 9, 2019
+.Dt ZPOOL 8
+.Os
+.Sh NAME
+.Nm zpool
+.Nd configure ZFS storage pools
+.Sh SYNOPSIS
+.Nm
+.Fl ?V
+.Nm
+.Cm version
+.Nm
+.Cm <subcommand>
+.Op Ar <args>
+.Sh DESCRIPTION
+The
+.Nm
+command configures ZFS storage pools.
+A storage pool is a collection of devices that provides physical storage and
+data replication for ZFS datasets.
+All datasets within a storage pool share the same space.
+See
+.Xr zfs 8
+for information on managing datasets.
+.Pp
+For an overview of creating and managing ZFS storage pools see the
+.Xr zpoolconcepts 8
+manual page.
+.Sh SUBCOMMANDS
+All subcommands that modify state are logged persistently to the pool in their
+original form.
+.Pp
+The
+.Nm
+command provides subcommands to create and destroy storage pools, add capacity
+to storage pools, and provide information about the storage pools.
+The following subcommands are supported:
+.Bl -tag -width Ds
+.It Xo
+.Nm
+.Fl ?
+.Xc
+Displays a help message.
+.It Xo
+.Nm
+.Fl V, -version
+.Xc
+An alias for the
+.Nm zpool Cm version
+subcommand.
+.It Xo
+.Nm
+.Cm version
+.Xc
+Displays the software version of the
+.Nm
+userland utility and the zfs kernel module.
+.El
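+.Pp
+For example, the versions of both components can be displayed as follows;
+the exact version strings shown here are only illustrative and depend on the
+installed release:
+.Bd -literal
+# zpool version
+zfs-0.8.4-1
+zfs-kmod-0.8.4-1
+.Ed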
+.Ss Creation
+.Bl -tag -width Ds
+.It Xr zpool-create 8
+Creates a new storage pool containing the virtual devices specified on the
+command line.
+.It Xr zpool-initialize 8
+Begins initializing by writing to all unallocated regions on the specified
+devices, or all eligible devices in the pool if no individual devices are
+specified.
+.El
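+.Pp
+For example, assuming a pool named
+.Em tank ,
+initialization of all eligible devices can be started, and later suspended,
+with commands of this form (see
+.Xr zpool-initialize 8
+for the complete option list):
+.Bd -literal
+# zpool initialize tank
+# zpool initialize -s tank
+.Ed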
+.Ss Destruction
+.Bl -tag -width Ds
+.It Xr zpool-destroy 8
+Destroys the given pool, freeing up any devices for other use.
+.It Xr zpool-labelclear 8
+Removes ZFS label information from the specified
+.Ar device .
+.El
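+.Pp
+For example, stale label information can be cleared from a disk that no longer
+belongs to any pool; the device name below is only a placeholder:
+.Bd -literal
+# zpool labelclear -f /dev/sdc
+.Ed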
+.Ss Virtual Devices
+.Bl -tag -width Ds
+.It Xo
+.Xr zpool-attach 8 /
+.Xr zpool-detach 8
+.Xc
+Increases or decreases redundancy by
+.Cm attach Ns -ing or
+.Cm detach Ns -ing a device on an existing vdev (virtual device).
+.It Xo
+.Xr zpool-add 8 /
+.Xr zpool-remove 8
+.Xc
+Adds the specified virtual devices to the given pool,
+or removes the specified device from the pool.
+.It Xr zpool-replace 8
+Replaces an existing device (which may be faulted) with a new one.
+.It Xr zpool-split 8
+Creates a new pool by splitting all mirrors in an existing pool
+(which decreases its redundancy).
+.El
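+.Pp
+For example, assuming a pool
+.Em tank
+whose only vdev is the single disk
+.Em sda ,
+redundancy can be increased by attaching a second disk to form a mirror, and
+reduced again by detaching it (device names are placeholders):
+.Bd -literal
+# zpool attach tank sda sdb
+# zpool detach tank sdb
+.Ed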
+.Ss Properties
+Available pool properties are listed in the
+.Xr zpoolprops 8
+manual page.
+.Bl -tag -width Ds
+.It Xr zpool-list 8
+Lists the given pools along with a health status and space usage.
+.It Xo
+.Xr zpool-get 8 /
+.Xr zpool-set 8
+.Xc
+Retrieves the given list of properties
+.Po
+or all properties if
+.Sy all
+is used
+.Pc
+for the specified storage pool(s).
+.El
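+.Pp
+For example, assuming a pool named
+.Em tank ,
+all of its properties can be listed and a single property changed as follows:
+.Bd -literal
+# zpool get all tank
+# zpool set autoexpand=on tank
+.Ed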
+.Ss Monitoring
+.Bl -tag -width Ds
+.It Xr zpool-status 8
+Displays the detailed health status for the given pools.
+.It Xr zpool-iostat 8
+Displays logical I/O statistics for the given pools/vdevs. Physical I/Os may
+be observed via
+.Xr iostat 1 .
+.It Xr zpool-events 8
+Lists all recent events generated by the ZFS kernel modules.
+These events are consumed by the
+.Xr zed 8
+daemon and used to automate administrative tasks such as replacing a failed
+device with a hot spare.
+For more information about the subclasses and event payloads that can be
+generated, see the
+.Xr zfs-events 5
+man page.
+.It Xr zpool-history 8
+Displays the command history of the specified pool(s) or all pools if no pool is
+specified.
+.El
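+.Pp
+For example, recent events and the recorded command history of a pool named
+.Em tank
+could be inspected with:
+.Bd -literal
+# zpool events -v
+# zpool history tank
+.Ed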
+.Ss Maintenance
+.Bl -tag -width Ds
+.It Xr zpool-scrub 8
+Begins a scrub or resumes a paused scrub.
+.It Xr zpool-checkpoint 8
+Checkpoints the current state of
+.Ar pool ,
+which can later be restored by
+.Nm zpool Cm import --rewind-to-checkpoint .
+.It Xr zpool-trim 8
+Initiates an immediate on-demand TRIM operation for all of the free space in
+a pool. This operation informs the underlying storage devices of all blocks
+in the pool which are no longer allocated and allows thinly provisioned
+devices to reclaim the space.
+.It Xr zpool-sync 8
+Forces all in-core dirty data to be written to the primary pool storage and
+not the ZIL.
+It will also update administrative information, including quota reporting.
+Without arguments,
+.Sy zpool sync
+will sync all pools on the system.
+Otherwise, it will sync only the specified pool(s).
+.It Xr zpool-upgrade 8
+Manages the on-disk format version of storage pools.
+.It Xr zpool-wait 8
+Waits until all background activity of the given types has ceased in the given
+pool.
+.El
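+.Pp
+For example, assuming a pool named
+.Em tank ,
+a scrub can be started and waited on until it completes, and a TRIM of the
+pool's free space requested afterwards (see the respective subcommand pages
+for all options):
+.Bd -literal
+# zpool scrub tank
+# zpool wait -t scrub tank
+# zpool trim tank
+.Ed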
+.Ss Fault Resolution
+.Bl -tag -width Ds
+.It Xo
+.Xr zpool-offline 8 /
+.Xr zpool-online 8
+.Xc
+Takes the specified physical device offline or brings it online.
+.It Xr zpool-resilver 8
+Starts a resilver.
+If an existing resilver is already running, it will be restarted from the
+beginning.
+.It Xr zpool-reopen 8
+Reopens all the vdevs associated with the pool.
+.It Xr zpool-clear 8
+Clears device errors in a pool.
+.El
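+.Pp
+For example, a suspect disk can be taken offline, brought back online, and the
+pool's error counters cleared afterwards (pool and device names are
+placeholders):
+.Bd -literal
+# zpool offline tank sda
+# zpool online tank sda
+# zpool clear tank
+.Ed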
+.Ss Import & Export
+.Bl -tag -width Ds
+.It Xr zpool-import 8
+Makes disks containing ZFS storage pools available for use on the system.
+.It Xr zpool-export 8
+Exports the given pools from the system.
+.It Xr zpool-reguid 8
+Generates a new unique identifier for the pool.
+.El
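+.Pp
+For example, a previously exported pool could be imported while restricting
+the device search to a single directory, and then given a new unique
+identifier; the directory shown is only an illustration:
+.Bd -literal
+# zpool import -d /dev/disk/by-id tank
+# zpool reguid tank
+.Ed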
+.Sh EXIT STATUS
+The following exit values are returned:
+.Bl -tag -width Ds
+.It Sy 0
+Successful completion.
+.It Sy 1
+An error occurred.
+.It Sy 2
+Invalid command line options were specified.
+.El
+.Sh EXAMPLES
+.Bl -tag -width Ds
+.It Sy Example 1 No Creating a RAID-Z Storage Pool
+The following command creates a pool with a single raidz root vdev that
+consists of six disks.
+.Bd -literal
+# zpool create tank raidz sda sdb sdc sdd sde sdf
+.Ed
+.It Sy Example 2 No Creating a Mirrored Storage Pool
+The following command creates a pool with two mirrors, where each mirror
+contains two disks.
+.Bd -literal
+# zpool create tank mirror sda sdb mirror sdc sdd
+.Ed
+.It Sy Example 3 No Creating a ZFS Storage Pool by Using Partitions
+The following command creates an unmirrored pool using two disk partitions.
+.Bd -literal
+# zpool create tank sda1 sdb2
+.Ed
+.It Sy Example 4 No Creating a ZFS Storage Pool by Using Files
+The following command creates an unmirrored pool using files.
+While not recommended, a pool based on files can be useful for experimental
+purposes.
+.Bd -literal
+# zpool create tank /path/to/file/a /path/to/file/b
+.Ed
+.It Sy Example 5 No Adding a Mirror to a ZFS Storage Pool
+The following command adds two mirrored disks to the pool
+.Em tank ,
+assuming the pool is already made up of two-way mirrors.
+The additional space is immediately available to any datasets within the pool.
+.Bd -literal
+# zpool add tank mirror sda sdb
+.Ed
+.It Sy Example 6 No Listing Available ZFS Storage Pools
+The following command lists all available pools on the system.
+In this case, the pool
+.Em zion
+is faulted due to a missing device.
+The results from this command are similar to the following:
+.Bd -literal
+# zpool list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+.Ed
+.It Sy Example 7 No Destroying a ZFS Storage Pool
+The following command destroys the pool
+.Em tank
+and any datasets contained within.
+.Bd -literal
+# zpool destroy -f tank
+.Ed
+.It Sy Example 8 No Exporting a ZFS Storage Pool
+The following command exports the devices in pool
+.Em tank
+so that they can be relocated or later imported.
+.Bd -literal
+# zpool export tank
+.Ed
+.It Sy Example 9 No Importing a ZFS Storage Pool
+The following command displays available pools, and then imports the pool
+.Em tank
+for use on the system.
+The results from this command are similar to the following:
+.Bd -literal
+# zpool import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+# zpool import tank
+.Ed
+.It Sy Example 10 No Upgrading All ZFS Storage Pools to the Current Version
+The following command upgrades all ZFS storage pools to the current version of
+the software.
+.Bd -literal
+# zpool upgrade -a
+This system is currently running ZFS version 2.
+.Ed
+.It Sy Example 11 No Managing Hot Spares
+The following command creates a new pool with an available hot spare:
+.Bd -literal
+# zpool create tank mirror sda sdb spare sdc
+.Ed
+.Pp
+If one of the disks were to fail, the pool would be reduced to the degraded
+state.
+The failed device can be replaced using the following command:
+.Bd -literal
+# zpool replace tank sda sdd
+.Ed
+.Pp
+Once the data has been resilvered, the spare is automatically removed and is
+made available for use should another device fail.
+The hot spare can be permanently removed from the pool using the following
+command:
+.Bd -literal
+# zpool remove tank sdc
+.Ed
+.It Sy Example 12 No Creating a ZFS Pool with Mirrored Separate Intent Logs
+The following command creates a ZFS storage pool consisting of two two-way
+mirrors and mirrored log devices:
+.Bd -literal
+# zpool create pool mirror sda sdb mirror sdc sdd log mirror \\
+ sde sdf
+.Ed
+.It Sy Example 13 No Adding Cache Devices to a ZFS Pool
+The following command adds two disks for use as cache devices to a ZFS storage
+pool:
+.Bd -literal
+# zpool add pool cache sdc sdd
+.Ed
+.Pp
+Once added, the cache devices gradually fill with content from main memory.
+Depending on the size of your cache devices, it could take over an hour for
+them to fill.
+Capacity and reads can be monitored using the
+.Cm iostat
+subcommand as follows:
+.Bd -literal
+# zpool iostat -v pool 5
+.Ed
+.It Sy Example 14 No Removing a Mirrored Top-Level (Log or Data) Device
+The following commands remove the mirrored log device
+.Sy mirror-2
+and mirrored top-level data device
+.Sy mirror-1 .
+.Pp
+Given this configuration:
+.Bd -literal
+  pool: tank
+ state: ONLINE
+ scrub: none requested
+config:
+
+        NAME        STATE     READ WRITE CKSUM
+        tank        ONLINE       0     0     0
+          mirror-0  ONLINE       0     0     0
+            sda     ONLINE       0     0     0
+            sdb     ONLINE       0     0     0
+          mirror-1  ONLINE       0     0     0
+            sdc     ONLINE       0     0     0
+            sdd     ONLINE       0     0     0
+        logs
+          mirror-2  ONLINE       0     0     0
+            sde     ONLINE       0     0     0
+            sdf     ONLINE       0     0     0
+.Ed
+.Pp
+The command to remove the mirrored log
+.Sy mirror-2
+is:
+.Bd -literal
+# zpool remove tank mirror-2
+.Ed
+.Pp
+The command to remove the mirrored data
+.Sy mirror-1
+is:
+.Bd -literal
+# zpool remove tank mirror-1
+.Ed
+.It Sy Example 15 No Displaying Expanded Space on a Device
+The following command displays the detailed information for the pool
+.Em data .
+This pool is composed of a single raidz vdev where one of its devices
+increased its capacity by 10 GB.
+In this example, the pool will not be able to utilize this extra capacity until
+all the devices under the raidz vdev have been expanded.
+.Bd -literal
+# zpool list -v data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+.Ed
+.It Sy Example 16 No Adding Output Columns
+Additional columns can be added to the
+.Nm zpool Cm status
+and
+.Nm zpool Cm iostat
+output with the
+.Fl c
+option.
+.Bd -literal
+# zpool status -c vendor,model,size
+   NAME     STATE  READ WRITE CKSUM vendor  model        size
+   tank     ONLINE 0    0     0
+   mirror-0 ONLINE 0    0     0
+   U1       ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U10      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U11      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U12      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U13      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+   U14      ONLINE 0    0     0     SEAGATE ST8000NM0075 7.3T
+
+# zpool iostat -vc size
+              capacity     operations     bandwidth
+pool        alloc   free   read  write   read  write  size
+----------  -----  -----  -----  -----  -----  -----  ----
+rpool       14.6G  54.9G      4     55   250K  2.69M
+  sda1      14.6G  54.9G      4     55   250K  2.69M   70G
+----------  -----  -----  -----  -----  -----  -----  ----
+.Ed
+.El
+.Sh ENVIRONMENT VARIABLES
+.Bl -tag -width "ZFS_ABORT"
+.It Ev ZFS_ABORT
+Cause
+.Nm zpool
+to dump core on exit for the purposes of running
+.Sy ::findleaks .
+.El
+.Bl -tag -width "ZFS_COLOR"
+.It Ev ZFS_COLOR
+Use ANSI color in
+.Nm zpool status
+output.
+.El
+.Bl -tag -width "ZPOOL_IMPORT_PATH"
+.It Ev ZPOOL_IMPORT_PATH
+The search path for devices or files to use with the pool. This is a colon-separated list of directories in which
+.Nm zpool
+looks for device nodes and files.
+Similar to the
+.Fl d
+option in
+.Nm zpool import .
+.El
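+.Pp
+For example, the device search could be limited to persistent device names;
+the directories shown are only an illustration:
+.Bd -literal
+# ZPOOL_IMPORT_PATH=/dev/disk/by-id:/dev/disk/by-path zpool import
+.Ed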
+.Bl -tag -width "ZPOOL_IMPORT_UDEV_TIMEOUT_MS"
+.It Ev ZPOOL_IMPORT_UDEV_TIMEOUT_MS
+The maximum time in milliseconds that
+.Nm zpool import
+will wait for an expected device to be available.
+.El
+.Bl -tag -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE"
+.It Ev ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE
+If set, suppress the warning about non-native vdev ashift in
+.Nm zpool status .
+The value is not used; only the presence or absence of the variable matters.
+.El
+.Bl -tag -width "ZPOOL_VDEV_NAME_GUID"
+.It Ev ZPOOL_VDEV_NAME_GUID
+Cause
+.Nm zpool
+subcommands to output vdev GUIDs by default. This behavior is identical to the
+.Nm zpool status -g
+command line option.
+.El
+.Bl -tag -width "ZPOOL_VDEV_NAME_FOLLOW_LINKS"
+.It Ev ZPOOL_VDEV_NAME_FOLLOW_LINKS
+Cause
+.Nm zpool
+subcommands to follow links for vdev names by default. This behavior is identical to the
+.Nm zpool status -L
+command line option.
+.El
+.Bl -tag -width "ZPOOL_VDEV_NAME_PATH"
+.It Ev ZPOOL_VDEV_NAME_PATH
+Cause
+.Nm zpool
+subcommands to output full vdev path names by default. This
+behavior is identical to the
+.Nm zpool status -P
+command line option.
+.El
+.Bl -tag -width "ZFS_VDEV_DEVID_OPT_OUT"
+.It Ev ZFS_VDEV_DEVID_OPT_OUT
+Older ZFS on Linux implementations had issues when attempting to display pool
+config VDEV names if a
+.Sy devid
+NVP value was present in the pool's config.
+.Pp
+For example, a pool that originated on the illumos platform would have a devid
+value in the config and
+.Nm zpool status
+would fail when listing the config.
+This would also be true for future Linux-based pools.
+.Pp
+A pool can be stripped of any
+.Sy devid
+values on import or prevented from adding
+them on
+.Nm zpool create
+or
+.Nm zpool add
+by setting
+.Sy ZFS_VDEV_DEVID_OPT_OUT .
+.El
+.Bl -tag -width "ZPOOL_SCRIPTS_AS_ROOT"
+.It Ev ZPOOL_SCRIPTS_AS_ROOT
+Allow a privileged user to run
+.Nm zpool status/iostat
+with the
+.Fl c
+option. Normally, only unprivileged users are allowed to run
+.Fl c .
+.El
+.Bl -tag -width "ZPOOL_SCRIPTS_PATH"
+.It Ev ZPOOL_SCRIPTS_PATH
+The search path for scripts when running
+.Nm zpool status/iostat
+with the
+.Fl c
+option. This is a colon-separated list of directories and overrides the default
+.Pa ~/.zpool.d
+and
+.Pa /etc/zfs/zpool.d
+search paths.
+.El
+.Bl -tag -width "ZPOOL_SCRIPTS_ENABLED"
+.It Ev ZPOOL_SCRIPTS_ENABLED
+Allow a user to run
+.Nm zpool status/iostat
+with the
+.Fl c
+option. If
+.Sy ZPOOL_SCRIPTS_ENABLED
+is not set, it is assumed that the user is allowed to run
+.Nm zpool status/iostat -c .
+.El
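+.Pp
+For example, assuming the variable is interpreted as a boolean, the
+.Fl c
+scripts could be enabled explicitly for a single invocation;
+.Sy size
+is one of the scripts searched for in the default
+.Pa /etc/zfs/zpool.d
+directory:
+.Bd -literal
+# ZPOOL_SCRIPTS_ENABLED=1 zpool status -c size
+.Ed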
+.Sh INTERFACE STABILITY
+.Sy Evolving
+.Sh SEE ALSO
+.Xr zpoolconcepts 8 ,
+.Xr zpoolprops 8 ,
+.Xr zfs-events 5 ,
+.Xr zfs-module-parameters 5 ,
+.Xr zpool-features 5 ,
+.Xr zed 8 ,
+.Xr zfs 8