Diffstat (limited to 'sys/contrib/openzfs/man/man8')
88 files changed, 14225 insertions, 0 deletions
diff --git a/sys/contrib/openzfs/man/man8/.gitignore b/sys/contrib/openzfs/man/man8/.gitignore new file mode 100644 index 000000000000..a468f9cbf9d3 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/.gitignore @@ -0,0 +1,3 @@ +/zed.8 +/zfs-mount-generator.8 +/zfs_prepare_disk.8 diff --git a/sys/contrib/openzfs/man/man8/fsck.zfs.8 b/sys/contrib/openzfs/man/man8/fsck.zfs.8 new file mode 100644 index 000000000000..624c797b5f02 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/fsck.zfs.8 @@ -0,0 +1,79 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright 2013 Darik Horn <dajhorn@vanadac.com>. All rights reserved. +.\" +.Dd May 26, 2021 +.Dt FSCK.ZFS 8 +.Os +. +.Sh NAME +.Nm fsck.zfs +.Nd dummy ZFS filesystem checker +.Sh SYNOPSIS +.Nm +.Op Ar options +.Ar dataset Ns No … +. +.Sh DESCRIPTION +.Nm +is a thin shell wrapper that at most checks the status of a dataset's container +pool. +It is installed by OpenZFS because some Linux +distributions expect a fsck helper for all filesystems. +.Pp +If more than one +.Ar dataset +is specified, each is checked in turn and the results binary-ored. +. +.Sh OPTIONS +Ignored. +. 
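The binary-OR combination of per-dataset results described above can be sketched in plain shell. This is a minimal illustration with stand-in exit codes, not the wrapper's actual code:

```shell
# Sketch: combine per-dataset check results by binary OR, as the
# DESCRIPTION states. The four codes below are hypothetical stand-ins
# (ok, degraded, ok, faulted-with-fstab-entry).
rc=0
for code in 0 4 0 8; do
    rc=$((rc | code))
done
echo "$rc"    # 0 | 4 | 0 | 8 = 12
```

The OR means any nonzero per-dataset status survives into the final exit code, so a single degraded or faulted pool is never masked by healthy ones.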
+.Sh NOTES +ZFS datasets are checked by running +.Nm zpool Cm scrub +on the containing pool. +An individual ZFS dataset is never checked independently of its pool, +which is unlike a regular filesystem. +.Pp +However, the +.Xr fsck 8 +interface still allows it to communicate some errors: if the +.Ar dataset +is in a degraded pool, then +.Nm +will return exit code +.Sy 4 +to indicate an uncorrected filesystem error. +.Pp +Similarly, if the +.Ar dataset +is in a faulted pool and has a legacy +.Pa /etc/fstab +record, then +.Nm +will return exit code +.Sy 8 +to indicate a fatal operational error. +.Sh SEE ALSO +.Xr fstab 5 , +.Xr fsck 8 , +.Xr zpool-scrub 8 diff --git a/sys/contrib/openzfs/man/man8/mount.zfs.8 b/sys/contrib/openzfs/man/man8/mount.zfs.8 new file mode 100644 index 000000000000..a3686cd64f80 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/mount.zfs.8 @@ -0,0 +1,93 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright 2013 Darik Horn <dajhorn@vanadac.com>. All rights reserved. +.\" +.Dd May 24, 2021 +.Dt MOUNT.ZFS 8 +.Os +. 
+.Sh NAME +.Nm mount.zfs +.Nd mount ZFS filesystem +.Sh SYNOPSIS +.Nm +.Op Fl sfnvh +.Op Fl o Ar options +.Ar dataset +.Ar mountpoint +. +.Sh DESCRIPTION +The +.Nm +helper is used by +.Xr mount 8 +to mount filesystem snapshots and +.Sy mountpoint= Ns Ar legacy +ZFS filesystems, as well as by +.Xr zfs 8 +when the +.Sy ZFS_MOUNT_HELPER +environment variable is not set. +Users should invoke either +.Xr mount 8 +or +.Xr zfs 8 +in most cases. +.Pp +.Ar options +are handled according to the +.Em Temporary Mount Point Properties +section in +.Xr zfsprops 7 , +except for those described below. +.Pp +If +.Pa /etc/mtab +is a regular file and +.Fl n +was not specified, it will be updated via libmount. +. +.Sh OPTIONS +.Bl -tag -width "-o xa" +.It Fl s +Ignore unknown (sloppy) mount options. +.It Fl f +Do everything except actually executing the system call. +.It Fl n +Never update +.Pa /etc/mtab . +.It Fl v +Print resolved mount options and parser state. +.It Fl h +Print the usage message. +.It Fl o Ar zfsutil +This private flag indicates that +.Xr mount 8 +is being called by the +.Xr zfs 8 +command. +.El +. +.Sh SEE ALSO +.Xr fstab 5 , +.Xr mount 8 , +.Xr zfs-mount 8 diff --git a/sys/contrib/openzfs/man/man8/vdev_id.8 b/sys/contrib/openzfs/man/man8/vdev_id.8 new file mode 100644 index 000000000000..8e6b07dfc1e3 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/vdev_id.8 @@ -0,0 +1,96 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" This file and its contents are supplied under the terms of the +.\" Common Development and Distribution License ("CDDL"), version 1.0. +.\" You may only use this file in accordance with the terms of version +.\" 1.0 of the CDDL. +.\" +.\" A full copy of the text of the CDDL should have accompanied this +.\" source. A copy of the CDDL is also available via the Internet at +.\" http://www.illumos.org/license/CDDL. +.\" +.Dd May 26, 2021 +.Dt VDEV_ID 8 +.Os +. 
+.Sh NAME +.Nm vdev_id +.Nd generate user-friendly names for JBOD disks +.Sh SYNOPSIS +.Nm +.Fl d Ar dev +.Fl c Ar config_file +.Fl g Sy sas_direct Ns | Ns Sy sas_switch Ns | Ns Sy scsi +.Fl m +.Fl p Ar phys_per_port +. +.Sh DESCRIPTION +.Nm +is a udev helper which parses +.Xr vdev_id.conf 5 +to map a physical path in a storage topology to a channel name. +The channel name is combined with a disk enclosure slot number to create +an alias that reflects the physical location of the drive. +This is particularly helpful for tasks like replacing failed drives. +Slot numbers may also be remapped in case the default numbering is +unsatisfactory. +The drive aliases will be created as symbolic links in +.Pa /dev/disk/by-vdev . +.Pp +The currently supported topologies are +.Sy sas_direct , +.Sy sas_switch , +and +.Sy scsi . +A multipath mode is supported in which dm-mpath devices are handled by +examining the first running component disk as reported by the driver. +In multipath mode the configuration file should contain a +channel definition with the same name for each path to a given +enclosure. +.Pp +.Nm +also supports creating aliases based on existing udev links in the /dev +hierarchy using the +.Sy alias +configuration file keyword. +See +.Xr vdev_id.conf 5 +for details. +. +.Sh OPTIONS +.Bl -tag -width "-m" +.It Fl d Ar device +The device node to classify, like +.Pa /dev/sda . +.It Fl c Ar config_file +Specifies the path to an alternate configuration file. +The default is +.Pa /etc/zfs/vdev_id.conf . +.It Fl g Sy sas_direct Ns | Ns Sy sas_switch Ns | Ns Sy scsi +Identifies a physical topology that governs how physical paths are +mapped to channels: +.Bl -tag -compact -width "sas_direct and scsi" +.It Sy sas_direct No and Sy scsi +channels are uniquely identified by a PCI slot and HBA port number +.It Sy sas_switch +channels are uniquely identified by a SAS switch port number +.El +.It Fl m +Only handle dm-multipath devices. 
+If specified, examine the first running component disk of a dm-multipath +device as provided by the driver to determine the physical path. +.It Fl p Ar phys_per_port +Specifies the number of PHY devices associated with a SAS HBA port or SAS +switch port. +.Nm +internally uses this value to determine which HBA or switch port a +device is connected to. +The default is +.Sy 4 . +.It Fl h +Print a usage summary. +.El +. +.Sh SEE ALSO +.Xr vdev_id.conf 5 diff --git a/sys/contrib/openzfs/man/man8/zdb.8 b/sys/contrib/openzfs/man/man8/zdb.8 new file mode 100644 index 000000000000..c3290ea14769 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zdb.8 @@ -0,0 +1,602 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" This file and its contents are supplied under the terms of the +.\" Common Development and Distribution License ("CDDL"), version 1.0. +.\" You may only use this file in accordance with the terms of version +.\" 1.0 of the CDDL. +.\" +.\" A full copy of the text of the CDDL should have accompanied this +.\" source. A copy of the CDDL is also available via the Internet at +.\" http://www.illumos.org/license/CDDL. +.\" +.\" Copyright 2012, Richard Lowe. +.\" Copyright (c) 2012, 2019 by Delphix. All rights reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Lawrence Livermore National Security, LLC. +.\" Copyright (c) 2017 Intel Corporation. +.\" +.Dd August 12, 2025 +.Dt ZDB 8 +.Os +. 
+.Sh NAME +.Nm zdb +.Nd display ZFS storage pool debugging and consistency information +.Sh SYNOPSIS +.Nm +.Op Fl AbcdDFGhikLMNPsTvXYy +.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns … +.Op Fl I Ar inflight-I/O-ops +.Oo Fl o Ar var Ns = Ns Ar value Oc Ns … +.Op Fl t Ar txg +.Op Fl U Ar cache +.Op Fl x Ar dumpdir +.Op Fl K Ar key +.Op Ar poolname Ns Op / Ns Ar dataset Ns | Ns Ar objset-ID +.Op Ar object Ns | Ns Ar range Ns … +.Nm +.Op Fl AdiPv +.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns … +.Op Fl U Ar cache +.Op Fl K Ar key +.Ar poolname Ns Op Ar / Ns Ar dataset Ns | Ns Ar objset-ID +.Op Ar object Ns | Ns Ar range Ns … +.Nm +.Fl B +.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns … +.Op Fl U Ar cache +.Op Fl K Ar key +.Ar poolname Ns Ar / Ns Ar objset-ID +.Op Ar backup-flags +.Nm +.Fl C +.Op Fl A +.Op Fl U Ar cache +.Op Ar poolname +.Nm +.Fl E +.Op Fl A +.Ar word0 : Ns Ar word1 Ns :…: Ns Ar word15 +.Nm +.Fl l +.Op Fl Aqu +.Ar device +.Nm +.Fl m +.Op Fl AFLPXY +.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns … +.Op Fl t Ar txg +.Op Fl U Ar cache +.Ar poolname Op Ar vdev Oo Ar metaslab Oc Ns … +.Nm +.Fl -allocated-map +.Op Fl mAFLPXY +.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns … +.Op Fl t Ar txg +.Op Fl U Ar cache +.Ar poolname Op Ar vdev Oo Ar metaslab Oc Ns … +.Nm +.Fl O +.Op Fl K Ar key +.Ar dataset path +.Nm +.Fl r +.Op Fl K Ar key +.Ar dataset path destination +.Nm +.Fl R +.Op Fl A +.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns … +.Op Fl U Ar cache +.Ar poolname vdev : Ns Ar offset : Ns Oo Ar lsize Ns / Oc Ns Ar psize Ns Op : Ns Ar flags +.Nm +.Fl S +.Op Fl AP +.Op Fl e Oo Fl V Oc Oo Fl p Ar path Oc Ns … +.Op Fl U Ar cache +.Ar poolname +. +.Sh DESCRIPTION +The +.Nm +utility displays information about a ZFS pool useful for debugging and performs +some amount of consistency checking. +It is not a general purpose tool and options +.Pq and facilities +may change. +It is not a +.Xr fsck 8 +utility. 
+.Pp +The output of this command in general reflects the on-disk structure of a ZFS +pool, and is inherently unstable. +The precise output of most invocations is not documented; knowledge of ZFS +internals is assumed. +.Pp +If the +.Ar dataset +argument does not contain any +.Qq Sy / +or +.Qq Sy @ +characters, it is interpreted as a pool name. +The root dataset can be specified as +.Qq Ar pool Ns / . +.Pp +.Nm +is an +.Qq offline +tool; it accesses the block devices underneath the pools directly from +userspace and does not care if the pool is imported or datasets are mounted +(or even if the system understands ZFS at all). +When operating on an imported and active pool, it is possible, though unlikely, +that zdb may interpret inconsistent pool data and behave erratically. +. +.Sh OPTIONS +Display options: +.Bl -tag -width Ds +.It Fl Sy -allocated-map +Prints out a list of all the allocated regions in the pool. +Primarily intended for use with the +.Nm zhack metaslab leak +subcommand. +.It Fl b , -block-stats +Display statistics regarding the number, size +.Pq logical, physical and allocated +and deduplication of blocks. +.It Fl B , -backup +Generate a backup stream, similar to +.Nm zfs Cm send , +but for the numeric objset ID, and without opening the dataset. +This can be useful in recovery scenarios if dataset metadata has become +corrupted but the dataset itself is readable. +The optional +.Ar flags +argument is a string of one or more of the letters +.Sy e , +.Sy L , +.Sy c , +and +.Sy w , +which correspond to the same flags in +.Xr zfs-send 8 . +.It Fl c , -checksum +Verify the checksum of all metadata blocks while printing block statistics +.Po see +.Fl b +.Pc . +.Pp +If specified multiple times, verify the checksums of all blocks. +.It Fl C , -config +Display information about the configuration. +If specified with no other options, instead display information about the cache +file +.Pq Pa /etc/zfs/zpool.cache . 
+To specify the cache file to display, see +.Fl U . +.Pp +If specified multiple times and a pool name is also specified, display both the +cached configuration and the on-disk configuration. +If specified multiple times with +.Fl e , +also display the configuration that would be used were the pool to be imported. +.It Fl d , -datasets +Display information about datasets. +Specified once, displays basic dataset information: ID, create transaction, +size, and object count. +See +.Fl N +for determining if +.Ar poolname Ns Op / Ns Ar dataset Ns | Ns Ar objset-ID +is to use the specified +.Ar dataset Ns | Ns Ar objset-ID +as a string (dataset name) or a number (objset ID) when +datasets have numeric names. +.Pp +If specified multiple times, provides greater and greater verbosity. +.Pp +If object IDs or object ID ranges are specified, display information about +those specific objects or ranges only. +.Pp +An object ID range is specified in terms of a colon-separated tuple of +the form +.Ao start Ac : Ns Ao end Ac Ns Op : Ns Ao flags Ac . +The fields +.Ar start +and +.Ar end +are integer object identifiers that denote the lower and upper bounds +of the range. +An +.Ar end +value of -1 specifies a range with no upper bound. +The +.Ar flags +field optionally specifies a set of flags, described below, that control +which object types are dumped. +By default, all object types are dumped. +A minus sign +.Pq - +negates the effect of the flag that follows it and has no effect unless +preceded by the +.Ar A +flag. +For example, the range 0:-1:A-d will dump all object types except for +directories. 
+.Pp +.Bl -tag -compact -width Ds +.It Sy A +Dump all objects (this is the default) +.It Sy d +Dump ZFS directory objects +.It Sy f +Dump ZFS plain file objects +.It Sy m +Dump SPA space map objects +.It Sy z +Dump ZAP objects +.It Sy - +Negate the effect of next flag +.El +.It Fl D , -dedup-stats +Display deduplication statistics, including the deduplication ratio +.Pq Sy dedup , +compression ratio +.Pq Sy compress , +inflation due to the zfs copies property +.Pq Sy copies , +and an overall effective ratio +.Pq Sy dedup No \(mu Sy compress No / Sy copies . +.It Fl DD +Display a histogram of deduplication statistics, showing the allocated +.Pq physically present on disk +and referenced +.Pq logically referenced in the pool +block counts and sizes by reference count. +.It Fl DDD +Display the statistics independently for each deduplication table. +.It Fl DDDD +Dump the contents of the deduplication tables describing duplicate blocks. +.It Fl DDDDD +Also dump the contents of the deduplication tables describing unique blocks. +.It Fl E , -embedded-block-pointer Ns = Ns Ar word0 : Ns Ar word1 Ns :…: Ns Ar word15 +Decode and display a block from an embedded block pointer specified by the +.Ar word +arguments. +.It Fl h , -history +Display pool history similar to +.Nm zpool Cm history , +but include internal changes, transaction, and dataset information. +.It Fl i , -intent-logs +Display information about intent log +.Pq ZIL +entries relating to each dataset. +If specified multiple times, display counts of each intent log transaction type. +.It Fl k , -checkpointed-state +Examine the checkpointed state of the pool. +Note that the on-disk format of the pool is not reverted to the checkpointed state. +.It Fl l , -label Ns = Ns Ar device +Read the vdev labels and L2ARC header from the specified device. +.Nm Fl l +will return 0 if a valid label was found, 1 if an error occurred, and 2 if no valid +labels were found. 
+The presence of an L2ARC header is indicated by a specific +sequence (L2ARC_DEV_HDR_MAGIC). +If there is an accounting error in the size or the number of L2ARC log blocks +.Nm Fl l +will return 1. +Each unique configuration is displayed only once. +.It Fl ll Ar device +In addition, display label space usage stats. +If a valid L2ARC header was found, +also display the properties of log blocks used for restoring L2ARC contents +(persistent L2ARC). +.It Fl lll Ar device +Display every configuration, unique or not. +If a valid L2ARC header was found, +also display the properties of log entries in log blocks used for restoring +L2ARC contents (persistent L2ARC). +.Pp +If the +.Fl q +option is also specified, don't print the labels or the L2ARC header. +.Pp +If the +.Fl u +option is also specified, also display the uberblocks on this device. +Specify multiple times to increase verbosity. +.It Fl L , -disable-leak-tracking +Disable leak detection and the loading of space maps. +By default, +.Nm +verifies that all non-free blocks are referenced, which can be very expensive. +.It Fl m , -metaslabs +Display the offset, spacemap, and free space of each metaslab, all the log +spacemaps, and their obsolete entry statistics. +.It Fl mm +Also display information about the on-disk free space histogram associated with +each metaslab. +.It Fl mmm +Display the maximum contiguous free space, the in-core free space histogram, and +the percentage of free space in each space map. +.It Fl mmmm +Display every spacemap record. +.It Fl M , -metaslab-groups +Display all "normal" vdev metaslab group information: per-vdev metaslab count, +fragmentation, +and free space histogram, as well as overall pool fragmentation and histogram. +.It Fl MM +"Special" vdevs are added to -M's normal output. +Also display information about the maximum contiguous free space and the +percentage of free space in each space map. +.It Fl MMM +Display every spacemap record. 
+.It Fl N +Same as +.Fl d +but force zdb to interpret the +.Op Ar dataset Ns | Ns Ar objset-ID +in +.Op Ar poolname Ns Op / Ns Ar dataset Ns | Ns Ar objset-ID +as a numeric objset ID. +.It Fl O , -object-lookups Ns = Ns Ar dataset path +Look up the specified +.Ar path +inside of the +.Ar dataset +and display its metadata and indirect blocks. +Specified +.Ar path +must be relative to the root of +.Ar dataset . +This option can be combined with +.Fl v +for increasing verbosity. +.It Fl r , -copy-object Ns = Ns Ar dataset path destination +Copy the specified +.Ar path +inside of the +.Ar dataset +to the specified destination. +Specified +.Ar path +must be relative to the root of +.Ar dataset . +This option can be combined with +.Fl v +for increasing verbosity. +.It Xo +.Fl R , -read-block Ns = Ns Ar poolname vdev : Ns Ar offset : Ns Oo Ar lsize Ns / Oc Ns Ar psize Ns Op : Ns Ar flags +.Xc +Read and display a block from the specified device. +By default the block is displayed as a hex dump, but see the description of the +.Sy r +flag, below. +.Pp +The block is specified in terms of a colon-separated tuple +.Ar vdev +.Pq an integer vdev identifier +.Ar offset +.Pq the offset within the vdev +.Ar size +.Pq the physical size, or logical size / physical size +of the block to read and, optionally, +.Ar flags +.Pq a set of flags, described below . +.Pp +.Bl -tag -compact -width "b offset" +.It Sy b Ar offset +Print block pointer at hex offset +.It Sy c +Calculate and display checksums +.It Sy d +Decompress the block. +Set environment variable +.Nm ZDB_NO_ZLE +to skip zle when guessing. +.It Sy e +Byte swap the block +.It Sy g +Dump gang block header +.It Sy i +Dump indirect block +.It Sy r +Dump raw uninterpreted block data +.It Sy v +Verbose output for guessing compression algorithm +.El +.It Fl s , -io-stats +Report statistics on +.Nm zdb +I/O. +Display operation counts, bandwidth, and error counts of I/O to the pool from +.Nm . 
+.It Fl S , -simulate-dedup +Simulate the effects of deduplication, constructing a DDT and then displaying +that DDT as with +.Fl DD . +.It Fl T , -brt-stats +Display block reference table (BRT) statistics, including the size of unique +blocks cloned, the space saved as a result of cloning, and the saving ratio. +.It Fl TT +Display the per-vdev BRT statistics, including total references. +.It Fl TTT +Display histograms of per-vdev BRT refcounts. +.It Fl TTTT +Dump the contents of the block reference tables. +.It Fl u , -uberblock +Display the current uberblock. +.El +.Pp +Other options: +.Bl -tag -width Ds +.It Fl A , -ignore-assertions +Do not abort should any assertion fail. +.It Fl AA +Enable panic recovery; certain errors which would otherwise be fatal are +demoted to warnings. +.It Fl AAA +Do not abort if asserts fail and also enable panic recovery. +.It Fl e , -exported Ns = Ns Oo Fl p Ar path Oc Ns … +Operate on an exported pool, not present in +.Pa /etc/zfs/zpool.cache . +The +.Fl p +flag specifies the path under which devices are to be searched. +.It Fl x , -dump-blocks Ns = Ns Ar dumpdir +All blocks accessed will be copied to files in the specified directory. +The blocks will be placed in sparse files whose name is the same as +that of the file or device read. +.Nm +can then be run on the generated files. +Note that the +.Fl bbc +flags are sufficient to access +.Pq and thus copy +all metadata on the pool. +.It Fl F , -automatic-rewind +Attempt to make an unreadable pool readable by trying progressively older +transactions. +.It Fl G , -dump-debug-msg +Dump the contents of the zfs_dbgmsg buffer before exiting +.Nm . +zfs_dbgmsg is a buffer used by ZFS to dump advanced debug information. +.It Fl I , -inflight Ns = Ns Ar inflight-I/O-ops +Limit the number of outstanding checksum I/O operations to the specified value. +The default value is 200. +This option affects the performance of the +.Fl c +option. 
+.It Fl K , -key Ns = Ns Ar key +Decryption key needed to access an encrypted dataset. +This will cause +.Nm +to attempt to unlock the dataset using the encryption root, key format, and other +encryption parameters on the given dataset. +.Nm +can still inspect pool and dataset structures on encrypted datasets without +unlocking them, but will not be able to access file names and attributes and +object contents. \fBWARNING:\fP The raw decryption key and any decrypted data +will be in user memory while +.Nm +is running. +Other user programs may be able to extract it by inspecting +.Nm +as it runs. +Exercise extreme caution when using this option in shared or uncontrolled +environments. +.It Fl o , -option Ns = Ns Ar var Ns = Ns Ar value Ns … +Set the given tunable to the provided value. +.It Fl o , -option Ns = Ns Ar var Ns … +Show the value of the given tunable. +.It Fl o , -option Ns = Ns show +Show all tunables and their values. +.It Fl o , -option Ns = Ns info Ns = Ns Ar value Ns … +Show info about a tunable, including its name, type, and description. +.It Fl o , -option Ns = Ns info +Show info about all tunables. +.It Fl P , -parseable +Print numbers in an unscaled form more amenable to parsing, e.g.\& +.Sy 1000000 +rather than +.Sy 1M . +.It Fl t , -txg Ns = Ns Ar transaction +Specify the highest transaction to use when searching for uberblocks. +See also the +.Fl u +and +.Fl l +options for a means to see the available uberblocks and their associated +transaction numbers. +.It Fl U , -cachefile Ns = Ns Ar cachefile +Use a cache file other than +.Pa /etc/zfs/zpool.cache . +.It Fl v , -verbose +Enable verbosity. +Specify multiple times for increased verbosity. +.It Fl V , -verbatim +Attempt verbatim import. +This mimics the behavior of the kernel when loading a pool from a cachefile. +Only usable with +.Fl e . 
+.It Fl X , -extreme-rewind +Attempt +.Qq extreme +transaction rewind; that is, attempt the same recovery as +.Fl F +but read transactions otherwise deemed too old. +.It Fl Y , -all-reconstruction +Attempt all possible combinations when reconstructing indirect split blocks. +This flag disables the individual I/O deadman timer in order to allow as +much time as required for the attempted reconstruction. +.It Fl y , -livelist +Perform validation for livelists that are being deleted. +Scans through the livelist and metaslabs and compares the two, checking for +duplicate entries and potential double frees. +If it encounters issues, warnings will be printed, but the command will not +necessarily fail. +.El +.Pp +Specifying a display option more than once enables verbosity for only that +option, with more occurrences enabling more verbosity. +.Pp +If no options are specified, all information about the named pool will be +displayed at default verbosity. +. +.Sh EXIT STATUS +The +.Nm +utility exits +.Sy 0 +on success, +.Sy 1 +if a fatal error occurs, +.Sy 2 +if invalid command line options were specified, or +.Sy 3 +if on-disk corruption was detected, but was not fatal. +.Sh EXAMPLES +.Ss Example 1 : No Display the configuration of imported pool Ar rpool +.Bd -literal +.No # Nm zdb Fl C Ar rpool +MOS Configuration: + version: 28 + name: 'rpool' + … +.Ed +. +.Ss Example 2 : No Display basic dataset information about Ar rpool +.Bd -literal +.No # Nm zdb Fl d Ar rpool +Dataset mos [META], ID 0, cr_txg 4, 26.9M, 1051 objects +Dataset rpool/swap [ZVOL], ID 59, cr_txg 356, 486M, 2 objects + … +.Ed +. +.Ss Example 3 : No Display basic information about object 0 in Ar rpool/export/home +.Bd -literal +.No # Nm zdb Fl d Ar rpool/export/home 0 +Dataset rpool/export/home [ZPL], ID 137, cr_txg 1546, 32K, 8 objects + + Object lvl iblk dblk dsize lsize %full type + 0 7 16K 16K 15.0K 16K 25.00 DMU dnode +.Ed +. 
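A caller can branch on the exit codes listed in the EXIT STATUS section above. This shell sketch simulates the status with a subshell instead of running zdb, so it is an illustration only:

```shell
# Interpret the documented zdb exit codes. "(exit 3)" is a stand-in for
# a real invocation (e.g. "zdb -b rpool") so this sketch runs anywhere.
(exit 3)
case $? in
    0) echo "consistent" ;;
    1) echo "fatal error" ;;
    2) echo "invalid options" ;;
    3) echo "non-fatal on-disk corruption" ;;
esac    # prints "non-fatal on-disk corruption"
```

Distinguishing status 1 from 3 lets a monitoring job treat recoverable corruption differently from an outright failure of the check itself.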
+.Ss Example 4 : No Display the predicted effect of enabling deduplication on Ar rpool +.Bd -literal +.No # Nm zdb Fl S Ar rpool +Simulated DDT histogram: + +bucket allocated referenced +______ ______________________________ ______________________________ +refcnt blocks LSIZE PSIZE DSIZE blocks LSIZE PSIZE DSIZE +------ ------ ----- ----- ----- ------ ----- ----- ----- + 1 694K 27.1G 15.0G 15.0G 694K 27.1G 15.0G 15.0G + 2 35.0K 1.33G 699M 699M 74.7K 2.79G 1.45G 1.45G + … +dedup = 1.11, compress = 1.80, copies = 1.00, dedup * compress / copies = 2.00 +.Ed +. +.Sh SEE ALSO +.Xr zfs 8 , +.Xr zpool 8 diff --git a/sys/contrib/openzfs/man/man8/zed.8.in b/sys/contrib/openzfs/man/man8/zed.8.in new file mode 100644 index 000000000000..2d19f2d8496b --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zed.8.in @@ -0,0 +1,306 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" This file is part of the ZFS Event Daemon (ZED). +.\" Developed at Lawrence Livermore National Laboratory (LLNL-CODE-403049). +.\" Copyright (C) 2013-2014 Lawrence Livermore National Security, LLC. +.\" Refer to the OpenZFS git commit log for authoritative copyright attribution. +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License Version 1.0 (CDDL-1.0). +.\" You can obtain a copy of the license from the top-level file +.\" "OPENSOLARIS.LICENSE" or at <http://opensource.org/licenses/CDDL-1.0>. +.\" You may not use this file except in compliance with the license. +.\" +.\" Developed at Lawrence Livermore National Laboratory (LLNL-CODE-403049) +.\" +.Dd August 22, 2022 +.Dt ZED 8 +.Os +. +.Sh NAME +.Nm ZED +.Nd ZFS Event Daemon +.Sh SYNOPSIS +.Nm +.Op Fl fFhILMvVZ +.Op Fl d Ar zedletdir +.Op Fl p Ar pidfile +.Op Fl P Ar path +.Op Fl s Ar statefile +.Op Fl j Ar jobs +.Op Fl b Ar buflen +. +.Sh DESCRIPTION +The +.Nm +(ZFS Event Daemon) monitors events generated by the ZFS kernel +module. 
+When a zevent (ZFS Event) is posted, the +.Nm +will run any ZEDLETs (ZFS Event Daemon Linkage for Executable Tasks) +that have been enabled for the corresponding zevent class. +. +.Sh OPTIONS +.Bl -tag -width "-h" +.It Fl h +Display a summary of the command-line options. +.It Fl L +Display license information. +.It Fl V +Display version information. +.It Fl v +Be verbose. +.It Fl f +Force the daemon to run if at all possible, disabling security checks and +throwing caution to the wind. +Not recommended for use in production. +.It Fl F +Don't daemonize: remain attached to the controlling terminal, +log to the standard I/O streams. +.It Fl M +Lock all current and future pages in the virtual memory address space. +This may help the daemon remain responsive when the system is under heavy +memory pressure. +.It Fl I +Request that the daemon idle rather than exit when the kernel modules are not +loaded. +Processing of events will start, or resume, when the kernel modules are +(re)loaded. +Under Linux the kernel modules cannot be unloaded while the daemon is running. +.It Fl Z +Zero the daemon's state, thereby allowing zevents still within the kernel +to be reprocessed. +.It Fl d Ar zedletdir +Read the enabled ZEDLETs from the specified directory. +.It Fl p Ar pidfile +Write the daemon's process ID to the specified file. +.It Fl P Ar path +Custom +.Ev $PATH +for zedlets to use. +Normally zedlets run in a locked-down environment, with hardcoded paths to the +ZFS commands +.Pq Ev $ZFS , $ZPOOL , $ZED , … , +and a hard-coded +.Ev $PATH . +This is done for security reasons. +However, the ZFS test suite uses a custom PATH for its ZFS commands, and passes +it to +.Nm +with +.Fl P . +In short, +.Fl P +is only to be used by the ZFS test suite; never use +it in production! +.It Fl s Ar statefile +Write the daemon's state to the specified file. +.It Fl j Ar jobs +Allow at most +.Ar jobs +ZEDLETs to run concurrently, +delaying execution of new ones until they finish. 
+Defaults to +.Sy 16 . +.It Fl b Ar buflen +Cap kernel event buffer growth to +.Ar buflen +entries. +This buffer is grown when the daemon misses an event, but results in +unreclaimable memory use in the kernel. +A value of +.Sy 0 +removes the cap. +Defaults to +.Sy 1048576 . +.El +.Sh ZEVENTS +A zevent is comprised of a list of nvpairs (name/value pairs). +Each zevent contains an EID (Event IDentifier) that uniquely identifies it +throughout +the lifetime of the loaded ZFS kernel module; this EID is a monotonically +increasing integer that resets to 1 each time the kernel module is loaded. +Each zevent also contains a class string that identifies the type of event. +For brevity, a subclass string is defined that omits the leading components +of the class string. +Additional nvpairs exist to provide event details. +.Pp +The kernel maintains a list of recent zevents that can be viewed (along with +their associated lists of nvpairs) using the +.Nm zpool Cm events Fl v +command. +. +.Sh CONFIGURATION +ZEDLETs to be invoked in response to zevents are located in the +.Em enabled-zedlets +directory +.Pq Ar zedletdir . +These can be symlinked or copied from the +.Em installed-zedlets +directory; symlinks allow for automatic updates +from the installed ZEDLETs, whereas copies preserve local modifications. +As a security measure, since ownership change is a privileged operation, +ZEDLETs must be owned by root. +They must have execute permissions for the user, +but they must not have write permissions for group or other. +Dotfiles are ignored. +.Pp +ZEDLETs are named after the zevent class for which they should be invoked. +In particular, a ZEDLET will be invoked for a given zevent if either its +class or subclass string is a prefix of its filename (and is followed by +a non-alphabetic character). +As a special case, the prefix +.Sy all +matches all zevents. +Multiple ZEDLETs may be invoked for a given zevent. +. 
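The prefix-matching rule described in the CONFIGURATION section can be sketched as a small shell predicate. This is a hypothetical illustration of the documented rule, not ZED's actual implementation:

```shell
# Sketch: a ZEDLET runs for a zevent if its class or subclass string is
# a prefix of the ZEDLET filename followed by a non-alphabetic
# character; the special prefix "all" matches every zevent.
zedlet_matches() {
    subclass=$1 filename=$2
    case "$filename" in
        all[!A-Za-z]*)         return 0 ;;   # "all" matches everything
        "$subclass"[!A-Za-z]*) return 0 ;;   # e.g. statechange-led.sh
        *)                     return 1 ;;
    esac
}
zedlet_matches statechange statechange-led.sh && echo "runs"
zedlet_matches statechange statechanged.sh    || echo "skipped"
```

Note that `statechanged.sh` is skipped because the character after the subclass prefix is alphabetic, exactly as the text above requires.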
+.Sh ZEDLETS
+ZEDLETs are executables invoked by the ZED in response to a given zevent.
+They should be written under the presumption they can be invoked concurrently,
+and they should use appropriate locking to access any shared resources.
+The one exception to this is "synchronous zedlets", which are described later
+in this page.
+Common variables used by ZEDLETs can be stored in the default rc file which
+is sourced by scripts; these variables should be prefixed with
+.Sy ZED_ .
+.Pp
+The zevent nvpairs are passed to ZEDLETs as environment variables.
+Each nvpair name is converted to an environment variable in the following
+manner:
+.Bl -enum -compact
+.It
+it is prefixed with
+.Sy ZEVENT_ ,
+.It
+it is converted to uppercase, and
+.It
+each non-alphanumeric character is converted to an underscore.
+.El
+.Pp
+Some additional environment variables have been defined to present certain
+nvpair values in a more convenient form.
+An incomplete list of zevent environment variables is as follows:
+.Bl -tag -compact -width "ZEVENT_TIME_STRING"
+.It Sy ZEVENT_EID
+The Event IDentifier.
+.It Sy ZEVENT_CLASS
+The zevent class string.
+.It Sy ZEVENT_SUBCLASS
+The zevent subclass string.
+.It Sy ZEVENT_TIME
+The time at which the zevent was posted as
+.Dq Em seconds nanoseconds
+since the Epoch.
+.It Sy ZEVENT_TIME_SECS
+The
+.Em seconds
+component of
+.Sy ZEVENT_TIME .
+.It Sy ZEVENT_TIME_NSECS
+The
+.Em nanoseconds
+component of
+.Sy ZEVENT_TIME .
+.It Sy ZEVENT_TIME_STRING
+An almost-RFC3339-compliant string for
+.Sy ZEVENT_TIME .
+.El
+.Pp
+Additionally, the following ZED & ZFS variables are defined:
+.Bl -tag -compact -width "ZEVENT_TIME_STRING"
+.It Sy ZED_PID
+The daemon's process ID.
+.It Sy ZED_ZEDLET_DIR
+The daemon's current
+.Em enabled-zedlets
+directory.
+.It Sy ZFS_ALIAS
+The alias
+.Pq Dq Em name Ns - Ns Em version Ns - Ns Em release
+string of the ZFS distribution the daemon is part of.
+.It Sy ZFS_VERSION
+The ZFS version the daemon is part of.
+.It Sy ZFS_RELEASE
+The ZFS release the daemon is part of.
+.El
+.Pp
+ZEDLETs may need to call other ZFS commands.
+The installation paths of the following executables are defined as environment
+variables:
+.Sy ZDB ,
+.Sy ZED ,
+.Sy ZFS ,
+.Sy ZINJECT ,
+and
+.Sy ZPOOL .
+These variables may be overridden in the rc file.
+.
+.Sh SYNCHRONOUS ZEDLETS
+ZED's normal behavior is to spawn off zedlets in parallel and ignore their
+completion order.
+This means that ZED can potentially have zedlets for event ID number 2
+starting before zedlets for event ID number 1 have finished.
+Most of the time this is fine, and it actually helps when the system is getting
+hammered with hundreds of events.
+.Pp
+However, there are times when you want your zedlets to be executed in sequence
+with the event ID.
+That is where synchronous zedlets come in.
+.Pp
+ZED will wait for all previously spawned zedlets to finish before running
+a synchronous zedlet.
+A synchronous zedlet is guaranteed to be the only zedlet running;
+no other zedlets may run in parallel with it.
+Users should be careful to only use synchronous zedlets when needed, since
+they decrease parallelism.
+.Pp
+To make a zedlet synchronous, simply add a "-sync-" immediately following the
+event name in the zedlet's file name:
+.Pp
+.Sy EVENT_NAME-sync-ZEDLETNAME.sh
+.Pp
+For example, if you wanted a synchronous statechange script:
+.Pp
+.Sy statechange-sync-myzedlet.sh
+.
+.Sh FILES
+.Bl -tag -width "-c"
+.It Pa @sysconfdir@/zfs/zed.d
+The default directory for enabled ZEDLETs.
+.It Pa @sysconfdir@/zfs/zed.d/zed.rc
+The default rc file for common variables used by ZEDLETs.
+.It Pa @zfsexecdir@/zed.d
+The default directory for installed ZEDLETs.
+.It Pa @runstatedir@/zed.pid
+The default file containing the daemon's process ID.
+.It Pa @runstatedir@/zed.state
+The default file containing the daemon's state.
+.El
+.
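The nvpair-to-environment-variable conversion described in the ZEDLETS section (prefix with ZEVENT_, uppercase, non-alphanumerics to underscores) can be sketched in shell. The helper name is invented for illustration; ZED performs this conversion internally.

```shell
# Sketch of the nvpair-name conversion: prefix with ZEVENT_,
# uppercase, and map every non-alphanumeric character to an
# underscore.  Hypothetical helper, not part of ZED itself.
zevent_env_name() {
    printf 'ZEVENT_%s\n' "$1" |
        tr '[:lower:]' '[:upper:]' |
        sed 's/[^A-Z0-9]/_/g'
}
```

For example, an nvpair named pool_guid would be seen by a ZEDLET as the environment variable ZEVENT_POOL_GUID.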
+.Sh SIGNALS +.Bl -tag -width "-c" +.It Sy SIGHUP +Reconfigure the daemon and rescan the directory for enabled ZEDLETs. +.It Sy SIGTERM , SIGINT +Terminate the daemon. +.El +. +.Sh SEE ALSO +.Xr zfs 8 , +.Xr zpool 8 , +.Xr zpool-events 8 +. +.Sh NOTES +The +.Nm +requires root privileges. +.Pp +Do not taunt the +.Nm . +. +.Sh BUGS +ZEDLETs are unable to return state/status information to the kernel. +.Pp +Internationalization support via gettext has not been added. diff --git a/sys/contrib/openzfs/man/man8/zfs-allow.8 b/sys/contrib/openzfs/man/man8/zfs-allow.8 new file mode 100644 index 000000000000..e3b0e1ab3e12 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-allow.8 @@ -0,0 +1,495 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. 
+.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd September 8, 2025 +.Dt ZFS-ALLOW 8 +.Os +. +.Sh NAME +.Nm zfs-allow +.Nd delegate ZFS administration permissions to unprivileged users +.Sh SYNOPSIS +.Nm zfs +.Cm allow +.Op Fl dglu +.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns … +.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm allow +.Op Fl dl +.Fl e Ns | Ns Sy everyone +.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm allow +.Fl c +.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm allow +.Fl s No @ Ns Ar setname +.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm unallow +.Op Fl dglru +.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns … +.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … Oc +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm unallow +.Op Fl dlr +.Fl e Ns | Ns Sy everyone +.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … Oc +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm unallow +.Op Fl r +.Fl c +.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … Oc +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm unallow +.Op Fl r +.Fl s No @ Ns Ar setname +.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … Oc +.Ar filesystem Ns | Ns Ar volume +. 
+.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm allow +.Ar filesystem Ns | Ns Ar volume +.Xc +Displays permissions that have been delegated on the specified filesystem or +volume. +See the other forms of +.Nm zfs Cm allow +for more information. +.Pp +Delegations are supported under Linux with the exception of +.Sy mount , +.Sy unmount , +.Sy mountpoint , +.Sy canmount , +.Sy rename , +and +.Sy share . +These permissions cannot be delegated because the Linux +.Xr mount 8 +command restricts modifications of the global namespace to the root user. +.It Xo +.Nm zfs +.Cm allow +.Op Fl dglu +.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns … +.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … +.Ar filesystem Ns | Ns Ar volume +.Xc +.It Xo +.Nm zfs +.Cm allow +.Op Fl dl +.Fl e Ns | Ns Sy everyone +.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … +.Ar filesystem Ns | Ns Ar volume +.Xc +Delegates ZFS administration permission for the file systems to non-privileged +users. +.Bl -tag -width "-d" +.It Fl d +Allow only for the descendent file systems. +.It Fl e Ns | Ns Sy everyone +Specifies that the permissions be delegated to everyone. +.It Fl g Ar group Ns Oo , Ns Ar group Oc Ns … +Explicitly specify that permissions are delegated to the group. +.It Fl l +Allow +.Qq locally +only for the specified file system. +.It Fl u Ar user Ns Oo , Ns Ar user Oc Ns … +Explicitly specify that permissions are delegated to the user. +.It Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns … +Specifies to whom the permissions are delegated. +Multiple entities can be specified as a comma-separated list. +If neither of the +.Fl gu +options are specified, then the argument is interpreted preferentially as the +keyword +.Sy everyone , +then as a user name, and lastly as a group name. +To specify a user or group named +.Qq everyone , +use the +.Fl g +or +.Fl u +options. 
+To specify a group with the same name as a user, use the +.Fl g +options. +.It Xo +.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … +.Xc +The permissions to delegate. +Multiple permissions may be specified as a comma-separated list. +Permission names are the same as ZFS subcommand and property names. +See the property list below. +Property set names, which begin with +.Sy @ , +may be specified. +See the +.Fl s +form below for details. +.El +.Pp +If neither of the +.Fl dl +options are specified, or both are, then the permissions are allowed for the +file system or volume, and all of its descendants. +.Pp +Permissions are generally the ability to use a ZFS subcommand or change a ZFS +property. +The following permissions are available: +.TS +l l l . +NAME TYPE NOTES +_ _ _ +allow subcommand Must also have the permission that is being allowed +bookmark subcommand +clone subcommand Must also have the \fBcreate\fR ability and \fBmount\fR ability in the origin file system +create subcommand Must also have the \fBmount\fR ability. Must also have the \fBrefreservation\fR ability to create a non-sparse volume. +destroy subcommand Must also have the \fBmount\fR ability +diff subcommand Allows lookup of paths within a dataset given an object number, and the ability to create snapshots necessary to \fBzfs diff\fR. +hold subcommand Allows adding a user hold to a snapshot +load-key subcommand Allows loading and unloading of encryption key (see \fBzfs load-key\fR and \fBzfs unload-key\fR). +change-key subcommand Allows changing an encryption key via \fBzfs change-key\fR. 
+mount subcommand Allows mounting/unmounting ZFS datasets +promote subcommand Must also have the \fBmount\fR and \fBpromote\fR ability in the origin file system +receive subcommand Must also have the \fBmount\fR and \fBcreate\fR ability, required for \fBzfs receive -F\fR (see also \fBreceive:append\fR for limited, non forced receive) +release subcommand Allows releasing a user hold which might destroy the snapshot +rename subcommand Must also have the \fBmount\fR and \fBcreate\fR ability in the new parent +rollback subcommand Must also have the \fBmount\fR ability +send subcommand Allows sending a replication stream of a dataset. +send:raw subcommand Only allows sending raw replication streams, preventing encrypted datasets being sent in decrypted form. +share subcommand Allows sharing file systems over NFS or SMB protocols +snapshot subcommand Must also have the \fBmount\fR ability + +receive:append other Must also have the \fBmount\fR and \fBcreate\fR ability, limited receive ability (can not do receive -F) +groupquota other Allows accessing any \fBgroupquota@\fI…\fR property +groupobjquota other Allows accessing any \fBgroupobjquota@\fI…\fR property +groupused other Allows reading any \fBgroupused@\fI…\fR property +groupobjused other Allows reading any \fBgroupobjused@\fI…\fR property +userprop other Allows changing any user property +userquota other Allows accessing any \fBuserquota@\fI…\fR property +userobjquota other Allows accessing any \fBuserobjquota@\fI…\fR property +userused other Allows reading any \fBuserused@\fI…\fR property +userobjused other Allows reading any \fBuserobjused@\fI…\fR property +projectobjquota other Allows accessing any \fBprojectobjquota@\fI…\fR property +projectquota other Allows accessing any \fBprojectquota@\fI…\fR property +projectobjused other Allows reading any \fBprojectobjused@\fI…\fR property +projectused other Allows reading any \fBprojectused@\fI…\fR property + +aclinherit property +aclmode property +acltype property 
+atime property +canmount property +casesensitivity property +checksum property +compression property +context property +copies property +dedup property +defcontext property +devices property +dnodesize property +encryption property +exec property +filesystem_limit property +fscontext property +keyformat property +keylocation property +logbias property +mlslabel property +mountpoint property +nbmand property +normalization property +overlay property +pbkdf2iters property +primarycache property +quota property +readonly property +recordsize property +redundant_metadata property +refquota property +refreservation property +relatime property +reservation property +rootcontext property +secondarycache property +setuid property +sharenfs property +sharesmb property +snapdev property +snapdir property +snapshot_limit property +special_small_blocks property +sync property +utf8only property +version property +volblocksize property +volmode property +volsize property +vscan property +xattr property +zoned property +.TE +.It Xo +.Nm zfs +.Cm allow +.Fl c +.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … +.Ar filesystem Ns | Ns Ar volume +.Xc +Sets +.Qq create time +permissions. +These permissions are granted +.Pq locally +to the creator of any newly-created descendent file system. +.It Xo +.Nm zfs +.Cm allow +.Fl s No @ Ns Ar setname +.Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns +.Ar setname Oc Ns … +.Ar filesystem Ns | Ns Ar volume +.Xc +Defines or adds permissions to a permission set. +The set can be used by other +.Nm zfs Cm allow +commands for the specified file system and its descendants. +Sets are evaluated dynamically, so changes to a set are immediately reflected. +Permission sets follow the same naming restrictions as ZFS file systems, but the +name must begin with +.Sy @ , +and can be no more than 64 characters long. 
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl dglru
+.Ar user Ns | Ns Ar group Ns Oo , Ns Ar user Ns | Ns Ar group Oc Ns …
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl dlr
+.Fl e Ns | Ns Sy everyone
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl c
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Removes permissions that were granted with the
+.Nm zfs Cm allow
+command.
+No permissions are explicitly denied, so other permissions granted are still in
+effect; for example, a permission granted by an ancestor remains in effect.
+If no permissions are specified, then all permissions for the specified
+.Ar user ,
+.Ar group ,
+or
+.Sy everyone
+are removed.
+Specifying
+.Sy everyone
+.Po or using the
+.Fl e
+option
+.Pc
+only removes the permissions that were granted to everyone, not all permissions
+for every user and group.
+See the
+.Nm zfs Cm allow
+command for a description of the
+.Fl ldugec
+options.
+.Bl -tag -width "-r"
+.It Fl r
+Recursively remove the permissions from this file system and all descendants.
+.El
+.It Xo
+.Nm zfs
+.Cm unallow
+.Op Fl r
+.Fl s No @ Ns Ar setname
+.Oo Ar perm Ns | Ns @ Ns Ar setname Ns Oo , Ns Ar perm Ns | Ns @ Ns
+.Ar setname Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Removes permissions from a permission set.
+If no permissions are specified, then all permissions are removed, thus removing
+the set entirely.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 17, 18, 19, 20 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Delegating ZFS Administration Permissions on a ZFS Dataset
+The following example shows how to set permissions so that user
+.Ar cindys
+can create, destroy, mount, and take snapshots on
+.Ar tank/cindys .
+The permissions on
+.Ar tank/cindys
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Sy cindys create , Ns Sy destroy , Ns Sy mount , Ns Sy snapshot Ar tank/cindys
+.No # Nm zfs Cm allow Ar tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+	user cindys create,destroy,mount,snapshot
+.Ed
+.Pp
+Because the
+.Ar tank/cindys
+mount point permission is set to 755 by default, user
+.Ar cindys
+will be unable to mount file systems under
+.Ar tank/cindys .
+Add an ACE similar to the following syntax to provide mount point access:
+.Dl # Cm chmod No A+user : Ns Ar cindys Ns :add_subdirectory:allow Ar /tank/cindys
+.
+.Ss Example 2 : No Delegating Create Time Permissions on a ZFS Dataset
+The following example shows how to grant anyone in the group
+.Ar staff
+permission to create file systems in
+.Ar tank/users .
+This syntax also allows staff members to destroy their own file systems, but not
+to destroy anyone else's file system.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar staff Sy create , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow Fl c Sy destroy Ar tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+	destroy
+Local+Descendent permissions:
+	group staff create,mount
+.Ed
+.
+.Ss Example 3 : No Defining and Granting a Permission Set on a ZFS Dataset
+The following example shows how to define and grant a permission set on the
+.Ar tank/users
+file system.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Fl s No @ Ns Ar pset Sy create , Ns Sy destroy , Ns Sy snapshot , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow staff No @ Ns Ar pset tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+	@pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+	group staff @pset
+.Ed
+.
+.Ss Example 4 : No Delegating Property Permissions on a ZFS Dataset
+The following example shows how to grant the ability to set quotas and
+reservations on the
+.Ar users/home
+file system.
+The permissions on
+.Ar users/home
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar cindys Sy quota , Ns Sy reservation Ar users/home
+.No # Nm zfs Cm allow Ar users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+	user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+.Ed
+.
+.Ss Example 5 : No Removing ZFS Delegated Permissions on a ZFS Dataset
+The following example shows how to remove the snapshot permission from the
+.Ar staff
+group on the
+.Sy tank/users
+file system.
+The permissions on
+.Sy tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds +.No # Nm zfs Cm unallow Ar staff Sy snapshot Ar tank/users +.No # Nm zfs Cm allow Ar tank/users +---- Permissions on tank/users --------------------------------------- +Permission sets: + @pset create,destroy,mount,snapshot +Local+Descendent permissions: + group staff @pset +.Ed diff --git a/sys/contrib/openzfs/man/man8/zfs-bookmark.8 b/sys/contrib/openzfs/man/man8/zfs-bookmark.8 new file mode 100644 index 000000000000..5a0933820020 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-bookmark.8 @@ -0,0 +1,76 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. 
+.\" Copyright 2019 Joyent, Inc. +.\" Copyright (c) 2019, 2020 by Christian Schwarz. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZFS-BOOKMARK 8 +.Os +. +.Sh NAME +.Nm zfs-bookmark +.Nd create bookmark of ZFS snapshot +.Sh SYNOPSIS +.Nm zfs +.Cm bookmark +.Ar snapshot Ns | Ns Ar bookmark +.Ar newbookmark +. +.Sh DESCRIPTION +Creates a new bookmark of the given snapshot or bookmark. +Bookmarks mark the point in time when the snapshot was created, and can be used +as the incremental source for a +.Nm zfs Cm send . +.Pp +When creating a bookmark from an existing redaction bookmark, the resulting +bookmark is +.Em not +a redaction bookmark. +.Pp +This feature must be enabled to be used. +See +.Xr zpool-features 7 +for details on ZFS feature flags and the +.Sy bookmarks +feature. +. +.Sh EXAMPLES +.\" These are, respectively, examples 23 from zfs.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Creating a bookmark +The following example creates a bookmark to a snapshot. +This bookmark can then be used instead of a snapshot in send streams. +.Dl # Nm zfs Cm bookmark Ar rpool Ns @ Ns Ar snapshot rpool Ns # Ns Ar bookmark +. +.Sh SEE ALSO +.Xr zfs-destroy 8 , +.Xr zfs-send 8 , +.Xr zfs-snapshot 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-change-key.8 b/sys/contrib/openzfs/man/man8/zfs-change-key.8 new file mode 120000 index 000000000000..d027a419d1e4 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-change-key.8 @@ -0,0 +1 @@ +zfs-load-key.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-clone.8 b/sys/contrib/openzfs/man/man8/zfs-clone.8 new file mode 100644 index 000000000000..9609cf2ce36a --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-clone.8 @@ -0,0 +1,97 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-CLONE 8 +.Os +. +.Sh NAME +.Nm zfs-clone +.Nd clone snapshot of ZFS dataset +.Sh SYNOPSIS +.Nm zfs +.Cm clone +.Op Fl p +.Oo Fl o Ar property Ns = Ns Ar value Oc Ns … +.Ar snapshot Ar filesystem Ns | Ns Ar volume +. 
+.Sh DESCRIPTION +See the +.Sx Clones +section of +.Xr zfsconcepts 7 +for details. +The target dataset can be located anywhere in the ZFS hierarchy, +and is created as the same type as the original. +.Bl -tag -width Ds +.It Fl o Ar property Ns = Ns Ar value +Sets the specified property; see +.Nm zfs Cm create +for details. +.It Fl p +Creates all the non-existing parent datasets. +Datasets created in this manner are automatically mounted according to the +.Sy mountpoint +property inherited from their parent. +If the target filesystem or volume already exists, the operation completes +successfully. +.El +. +.Sh EXAMPLES +.\" These are, respectively, examples 9, 10 from zfs.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Creating a ZFS Clone +The following command creates a writable file system whose initial contents are +the same as +.Ar pool/home/bob@yesterday . +.Dl # Nm zfs Cm clone Ar pool/home/bob@yesterday pool/clone +. +.Ss Example 2 : No Promoting a ZFS Clone +The following commands illustrate how to test out changes to a file system, and +then replace the original file system with the changed one, using clones, clone +promotion, and renaming: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm create Ar pool/project/production + populate /pool/project/production with data +.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today +.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta + make changes to /pool/project/beta and test them +.No # Nm zfs Cm promote Ar pool/project/beta +.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy +.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production + once the legacy version is no longer needed, it can be destroyed +.No # Nm zfs Cm destroy Ar pool/project/legacy +.Ed +. 
+.Sh SEE ALSO +.Xr zfs-promote 8 , +.Xr zfs-snapshot 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-create.8 b/sys/contrib/openzfs/man/man8/zfs-create.8 new file mode 100644 index 000000000000..58bde5799240 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-create.8 @@ -0,0 +1,280 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd June 2, 2023 +.Dt ZFS-CREATE 8 +.Os +. 
+.Sh NAME +.Nm zfs-create +.Nd create ZFS dataset +.Sh SYNOPSIS +.Nm zfs +.Cm create +.Op Fl Pnpuv +.Oo Fl o Ar property Ns = Ns Ar value Oc Ns … +.Ar filesystem +.Nm zfs +.Cm create +.Op Fl ps +.Op Fl b Ar blocksize +.Oo Fl o Ar property Ns = Ns Ar value Oc Ns … +.Fl V Ar size Ar volume +. +.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm create +.Op Fl Pnpuv +.Oo Fl o Ar property Ns = Ns Ar value Oc Ns … +.Ar filesystem +.Xc +Creates a new ZFS file system. +The file system is automatically mounted according to the +.Sy mountpoint +property inherited from the parent, unless the +.Fl u +option is used. +.Bl -tag -width "-o" +.It Fl o Ar property Ns = Ns Ar value +Sets the specified property as if the command +.Nm zfs Cm set Ar property Ns = Ns Ar value +was invoked at the same time the dataset was created. +Any editable ZFS property can also be set at creation time. +Multiple +.Fl o +options can be specified. +An error results if the same property is specified in multiple +.Fl o +options. +.It Fl p +Creates all the non-existing parent datasets. +Datasets created in this manner are automatically mounted according to the +.Sy mountpoint +property inherited from their parent. +Any property specified on the command line using the +.Fl o +option is ignored. +If the target filesystem already exists, the operation completes successfully. +.It Fl n +Do a dry-run +.Pq Qq No-op +creation. +No datasets will be created. +This is useful in conjunction with the +.Fl v +or +.Fl P +flags to validate properties that are passed via +.Fl o +options and those implied by other options. +The actual dataset creation can still fail due to insufficient privileges or +available capacity. +.It Fl P +Print machine-parsable verbose information about the created dataset. +Each line of output contains a key and one or two values, all separated by tabs. +The +.Sy create_ancestors +and +.Sy create +keys have +.Em filesystem +as their only value. 
+The
+.Sy create_ancestors
+key only appears if the
+.Fl p
+option is used.
+The
+.Sy property
+key has two values, a property name and that property's value.
+The
+.Sy property
+key may appear zero or more times, once for each property that will be set local
+to
+.Em filesystem
+due to the use of the
+.Fl o
+option.
+.It Fl u
+Do not mount the newly created file system.
+.It Fl v
+Print verbose information about the created dataset.
+.El
+.It Xo
+.Nm zfs
+.Cm create
+.Op Fl ps
+.Op Fl b Ar blocksize
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Fl V Ar size Ar volume
+.Xc
+Creates a volume of the given size.
+The volume is exported as a block device in
+.Pa /dev/zvol/path ,
+where
+.Em path
+is the name of the volume in the ZFS namespace.
+The size represents the logical size as exported by the device.
+By default, a reservation of equal size is created.
+.Pp
+.Ar size
+is automatically rounded up to the nearest multiple of the
+.Sy blocksize .
+.Bl -tag -width "-b"
+.It Fl b Ar blocksize
+Equivalent to
+.Fl o Sy volblocksize Ns = Ns Ar blocksize .
+If this option is specified in conjunction with
+.Fl o Sy volblocksize ,
+the resulting behavior is undefined.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property as if the
+.Nm zfs Cm set Ar property Ns = Ns Ar value
+command was invoked at the same time the dataset was created.
+Any editable ZFS property can also be set at creation time.
+Multiple
+.Fl o
+options can be specified.
+An error results if the same property is specified in multiple
+.Fl o
+options.
+.It Fl p
+Creates all the non-existing parent datasets.
+Datasets created in this manner are automatically mounted according to the
+.Sy mountpoint
+property inherited from their parent.
+Any property specified on the command line using the
+.Fl o
+option is ignored.
+If the target filesystem already exists, the operation completes successfully.
+.It Fl s
+Creates a sparse volume with no reservation.
+See
+.Sy volsize
+in the
+.Em Native Properties
+section of
+.Xr zfsprops 7
+for more information about sparse volumes.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+creation.
+No datasets will be created.
+This is useful in conjunction with the
+.Fl v
+or
+.Fl P
+flags to validate properties that are passed via
+.Fl o
+options and those implied by other options.
+The actual dataset creation can still fail due to insufficient privileges or
+available capacity.
+.It Fl P
+Print machine-parsable verbose information about the created dataset.
+Each line of output contains a key and one or two values, all separated by tabs.
+The
+.Sy create_ancestors
+and
+.Sy create
+keys have
+.Em volume
+as their only value.
+The
+.Sy create_ancestors
+key only appears if the
+.Fl p
+option is used.
+The
+.Sy property
+key has two values, a property name and that property's value.
+The
+.Sy property
+key may appear zero or more times, once for each property that will be set local
+to
+.Em volume
+due to the use of the
+.Fl b
+or
+.Fl o
+options, as well as
+.Sy refreservation
+if the volume is not sparse.
+.It Fl v
+Print verbose information about the created dataset.
+.El
+.El
+.Ss ZFS for Swap
+Swapping to a ZFS volume is prone to deadlock and not recommended.
+See the OpenZFS FAQ.
+.Pp
+Swapping to a file on a ZFS filesystem is not supported.
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 1, 10 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a ZFS File System Hierarchy
+The following commands create a file system named
+.Ar pool/home
+and a file system named
+.Ar pool/home/bob .
+The mount point
+.Pa /export/home
+is set for the parent file system, and is automatically inherited by the child
+file system.
+.Dl # Nm zfs Cm create Ar pool/home
+.Dl # Nm zfs Cm set Sy mountpoint Ns = Ns Ar /export/home pool/home
+.Dl # Nm zfs Cm create Ar pool/home/bob
+.
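The dry-run and machine-parsable flags described above compose into a simple pre-flight check. The following sketch is illustrative only: the `create_checked` helper and the dataset and property names are assumptions, not part of ZFS. It validates a property list with `-nP` before performing the real creation:

```shell
# Illustrative helper, not part of ZFS: validate -o properties with a
# no-op create, then create for real.
# Usage: create_checked <filesystem> [-o property=value ...]
ZFS=${ZFS:-zfs}
create_checked() {
    fs=$1; shift
    # -n: dry run (nothing is created); -P: machine-parsable report
    "$ZFS" create -nP "$@" "$fs" || return 1
    "$ZFS" create "$@" "$fs"
}
```

If the dry run rejects a property, the real creation is never attempted.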
+.Ss Example 2 : No Promoting a ZFS Clone +The following commands illustrate how to test out changes to a file system, and +then replace the original file system with the changed one, using clones, clone +promotion, and renaming: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm create Ar pool/project/production + populate /pool/project/production with data +.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today +.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta + make changes to /pool/project/beta and test them +.No # Nm zfs Cm promote Ar pool/project/beta +.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy +.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production + once the legacy version is no longer needed, it can be destroyed +.No # Nm zfs Cm destroy Ar pool/project/legacy +.Ed +. +.Sh SEE ALSO +.Xr zfs-destroy 8 , +.Xr zfs-list 8 , +.Xr zpool-create 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-destroy.8 b/sys/contrib/openzfs/man/man8/zfs-destroy.8 new file mode 100644 index 000000000000..6a6791f7a44e --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-destroy.8 @@ -0,0 +1,236 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
+.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd February 5, 2025 +.Dt ZFS-DESTROY 8 +.Os +. +.Sh NAME +.Nm zfs-destroy +.Nd destroy ZFS dataset, snapshots, or bookmark +.Sh SYNOPSIS +.Nm zfs +.Cm destroy +.Op Fl Rfnprv +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm destroy +.Op Fl Rdnprv +.Ar filesystem Ns | Ns Ar volume Ns @ Ns Ar snap Ns +.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns … +.Nm zfs +.Cm destroy +.Ar filesystem Ns | Ns Ar volume Ns # Ns Ar bookmark +. +.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm destroy +.Op Fl Rfnprv +.Ar filesystem Ns | Ns Ar volume +.Xc +Destroys the given dataset. +By default, the command unshares any file systems that are currently shared, +unmounts any file systems that are currently mounted, and refuses to destroy a +dataset that has active dependents +.Pq children or clones . +.Bl -tag -width "-R" +.It Fl R +Recursively destroy all dependents, including cloned file systems outside the +target hierarchy. +.It Fl f +Forcibly unmount file systems. +This option has no effect on non-file systems or unmounted file systems. +.It Fl n +Do a dry-run +.Pq Qq No-op +deletion. +No data will be deleted. 
+This is useful in conjunction with the +.Fl v +or +.Fl p +flags to determine what data would be deleted. +.It Fl p +Print machine-parsable verbose information about the deleted data. +.It Fl r +Recursively destroy all children. +.It Fl v +Print verbose information about the deleted data. +.El +.Pp +Extreme care should be taken when applying either the +.Fl r +or the +.Fl R +options, as they can destroy large portions of a pool and cause unexpected +behavior for mounted file systems in use. +.It Xo +.Nm zfs +.Cm destroy +.Op Fl Rdnprv +.Ar filesystem Ns | Ns Ar volume Ns @ Ns Ar snap Ns +.Oo % Ns Ar snap Ns Oo , Ns Ar snap Ns Oo % Ns Ar snap Oc Oc Oc Ns … +.Xc +Attempts to destroy the given snapshot(s). +This will fail if any clones of the snapshot exist or if the snapshot is held. +In this case, by default, +.Nm zfs Cm destroy +will have no effect and exit in error. +If the +.Fl d +option is applied, the command will instead mark the given snapshot for +automatic destruction as soon as it becomes eligible. +While marked for destruction, a snapshot remains visible, and the user may +create new clones from it and place new holds on it. +.Pp +The read-only snapshot properties +.Sy defer_destroy +and +.Sy userrefs +are used by +.Nm zfs Cm destroy +to determine eligibility and marked status. +.Pp +An inclusive range of snapshots may be specified by separating the first and +last snapshots with a percent sign. +The first and/or last snapshots may be left blank, in which case the +filesystem's oldest or newest snapshot will be implied. +.Pp +Multiple snapshots +.Pq or ranges of snapshots +of the same filesystem or volume may be specified in a comma-separated list of +snapshots. +Only the snapshot's short name +.Po the part after the +.Sy @ +.Pc +should be specified when using a range or comma-separated list to identify +multiple snapshots. +.Bl -tag -width "-R" +.It Fl R +Recursively destroy all clones of these snapshots, including the clones, +snapshots, and children. 
+If this flag is specified, the
+.Fl d
+flag will have no effect.
+.It Fl d
+Rather than returning an error if the given snapshot is ineligible for
+immediate destruction, mark it for deferred, automatic destruction once it
+becomes eligible.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+deletion.
+No data will be deleted.
+This is useful in conjunction with the
+.Fl p
+or
+.Fl v
+flags to determine what data would be deleted.
+.It Fl p
+Print machine-parsable verbose information about the deleted data.
+.It Fl r
+Destroy
+.Pq or mark for deferred deletion
+all snapshots with this name in descendent file systems.
+.It Fl v
+Print verbose information about the deleted data.
+.El
+.Pp
+Extreme care should be taken when applying either the
+.Fl r
+or the
+.Fl R
+options, as they can destroy large portions of a pool and cause unexpected
+behavior for mounted file systems in use.
+.It Xo
+.Nm zfs
+.Cm destroy
+.Ar filesystem Ns | Ns Ar volume Ns # Ns Ar bookmark
+.Xc
+The given bookmark is destroyed.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 3, 10, 15 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating and Destroying Multiple Snapshots
+The following command creates snapshots named
+.Ar yesterday No of Ar pool/home
+and all of its descendent file systems.
+Each snapshot is mounted on demand in the
+.Pa .zfs/snapshot
+directory at the root of its file system.
+The second command destroys the newly created snapshots.
+.Dl # Nm zfs Cm snapshot Fl r Ar pool/home Ns @ Ns Ar yesterday
+.Dl # Nm zfs Cm destroy Fl r Ar pool/home Ns @ Ns Ar yesterday
+.
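The percent-range syntax described above combines naturally with the dry-run flags. This sketch is illustrative only: the `destroy_range` helper and the dataset and snapshot names are assumptions, not part of ZFS. It previews what an inclusive range deletion would remove before committing to it:

```shell
# Illustrative helper, not part of ZFS: preview, then destroy, an
# inclusive snapshot range first%last on one dataset.
# Only short snapshot names (the part after @) are given in the range.
ZFS=${ZFS:-zfs}
destroy_range() {
    ds=$1 first=$2 last=$3
    # -n -v -p: no-op, verbose, machine-parsable preview
    "$ZFS" destroy -nvp "$ds@$first%$last" || return 1
    "$ZFS" destroy "$ds@$first%$last"
}
```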
+.Ss Example 2 : No Promoting a ZFS Clone +The following commands illustrate how to test out changes to a file system, and +then replace the original file system with the changed one, using clones, clone +promotion, and renaming: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm create Ar pool/project/production + populate /pool/project/production with data +.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today +.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta + make changes to /pool/project/beta and test them +.No # Nm zfs Cm promote Ar pool/project/beta +.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy +.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production + once the legacy version is no longer needed, it can be destroyed +.No # Nm zfs Cm destroy Ar pool/project/legacy +.Ed +. +.Ss Example 3 : No Performing a Rolling Snapshot +The following example shows how to maintain a history of snapshots with a +consistent naming scheme. +To keep a week's worth of snapshots, the user destroys the oldest snapshot, +renames the remaining snapshots, and then creates a new snapshot, as follows: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday +.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today +.Ed +. 
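The rename ladder in Example 3 can be generated rather than written out by hand. This shell sketch is illustrative only: the `rotate_week` function name, the dataset, and the seven-day retention window are assumptions, not part of ZFS. It performs the same rotation as the example:

```shell
# Illustrative helper, not part of ZFS: rotate a week of recursive
# snapshots; drop the oldest, shift the rest back one day, then take
# a fresh @today.
ZFS=${ZFS:-zfs}
rotate_week() {
    ds=$1
    "$ZFS" destroy -r "$ds@7daysago"
    for i in 6 5 4 3 2; do
        "$ZFS" rename -r "$ds@${i}daysago" "@$((i+1))daysago"
    done
    "$ZFS" rename -r "$ds@yesterday" @2daysago
    "$ZFS" rename -r "$ds@today" @yesterday
    "$ZFS" snapshot -r "$ds@today"
}
```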
+.Sh SEE ALSO
+.Xr zfs-create 8 ,
+.Xr zfs-hold 8 ,
+.Xr zfsprops 7
diff --git a/sys/contrib/openzfs/man/man8/zfs-diff.8 b/sys/contrib/openzfs/man/man8/zfs-diff.8
new file mode 100644
index 000000000000..5b94ea524666
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zfs-diff.8
@@ -0,0 +1,122 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org>
+.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved.
+.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
+.\" Copyright (c) 2014, Joyent, Inc. All rights reserved.
+.\" Copyright (c) 2014 by Adam Stevko. All rights reserved.
+.\" Copyright (c) 2014 Integros [integros.com]
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\"
+.Dd July 11, 2022
+.Dt ZFS-DIFF 8
+.Os
+.
+.Sh NAME
+.Nm zfs-diff
+.Nd show difference between ZFS snapshots
+.Sh SYNOPSIS
+.Nm zfs
+.Cm diff
+.Op Fl FHth
+.Ar snapshot Ar snapshot Ns | Ns Ar filesystem
+.
+.Sh DESCRIPTION
+Display the difference between a snapshot of a given filesystem and another
+snapshot of that filesystem from a later time or the current contents of the
+filesystem.
+The first column is a character indicating the type of change; the other columns
+indicate pathname, new pathname
+.Pq in case of rename ,
+change in link count, and optionally file type and/or change time.
+The types of change are:
+.Bl -tag -compact -offset Ds -width "M"
+.It Sy -
+The path has been removed
+.It Sy +
+The path has been created
+.It Sy M
+The path has been modified
+.It Sy R
+The path has been renamed
+.El
+.Bl -tag -width "-F"
+.It Fl F
+Display an indication of the type of file, in a manner similar to the
+.Fl F
+option of
+.Xr ls 1 .
+.Bl -tag -compact -offset 2n -width "B"
+.It Sy B
+Block device
+.It Sy C
+Character device
+.It Sy /
+Directory
+.It Sy >
+Door
+.It Sy |\&
+Named pipe
+.It Sy @
+Symbolic link
+.It Sy P
+Event port
+.It Sy =
+Socket
+.It Sy F
+Regular file
+.El
+.It Fl H
+Give more parsable tab-separated output, without header lines and without
+arrows.
+.It Fl t
+Display the path's inode change time as the first column of output.
+.It Fl h
+Do not
+.Sy \e0 Ns Ar ooo Ns -escape
+non-ASCII paths.
+.El
+.
+.Sh EXAMPLES
+.\" This is example 22 from zfs.8
+.\" Make sure to update it bidirectionally
+.Ss Example 1 : No Showing the differences between a snapshot and a ZFS Dataset
+The following example shows how to see what has changed between a prior
+snapshot of a ZFS dataset and its current state.
+The
+.Fl F
+option is used to indicate type information for the files affected.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm diff Fl F Ar tank/test@before tank/test
+M / /tank/test/
+M F /tank/test/linked (+1)
+R F /tank/test/oldname -> /tank/test/newname
+- F /tank/test/deleted
++ F /tank/test/created
+M F /tank/test/modified
+.Ed
+.
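The -H tab-separated form described above is convenient for scripting. This sketch is illustrative only: the `diff_summary` helper and the snapshot names are assumptions, not part of ZFS. It tallies the changes reported by a diff run by change type:

```shell
# Illustrative helper, not part of ZFS: count zfs diff entries by
# change type (-, +, M, R) using the tab-separated -FH output.
ZFS=${ZFS:-zfs}
diff_summary() {
    # $1 = older snapshot, $2 = newer snapshot or filesystem
    "$ZFS" diff -FH "$1" "$2" |
        awk -F'\t' '{n[$1]++} END {for (t in n) print t, n[t]}'
}
```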
+.Sh SEE ALSO +.Xr zfs-snapshot 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-get.8 b/sys/contrib/openzfs/man/man8/zfs-get.8 new file mode 120000 index 000000000000..c70b41ae4064 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-get.8 @@ -0,0 +1 @@ +zfs-set.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-groupspace.8 b/sys/contrib/openzfs/man/man8/zfs-groupspace.8 new file mode 120000 index 000000000000..8bc2f1df305e --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-groupspace.8 @@ -0,0 +1 @@ +zfs-userspace.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-hold.8 b/sys/contrib/openzfs/man/man8/zfs-hold.8 new file mode 100644 index 000000000000..a877e428f88b --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-hold.8 @@ -0,0 +1,115 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd November 8, 2022 +.Dt ZFS-HOLD 8 +.Os +. 
+.Sh NAME
+.Nm zfs-hold
+.Nd hold ZFS snapshots to prevent their removal
+.Sh SYNOPSIS
+.Nm zfs
+.Cm hold
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.Nm zfs
+.Cm holds
+.Op Fl rHp
+.Ar snapshot Ns …
+.Nm zfs
+.Cm release
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm hold
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.Xc
+Adds a single reference, named with the
+.Ar tag
+argument, to the specified snapshots.
+Each snapshot has its own tag namespace, and tags must be unique within that
+space.
+.Pp
+If a hold exists on a snapshot, attempts to destroy that snapshot by using the
+.Nm zfs Cm destroy
+command return
+.Sy EBUSY .
+.Bl -tag -width "-r"
+.It Fl r
+Specifies that a hold with the given tag is applied recursively to the snapshots
+of all descendent file systems.
+.El
+.It Xo
+.Nm zfs
+.Cm holds
+.Op Fl rHp
+.Ar snapshot Ns …
+.Xc
+Lists all existing user references for the given snapshot or snapshots.
+.Bl -tag -width "-r"
+.It Fl r
+Lists the holds that are set on the named descendent snapshots, in addition to
+listing the holds on the named snapshot.
+.It Fl H
+Do not print headers; use tab-delimited output.
+.It Fl p
+Prints hold timestamps as Unix epoch timestamps.
+.El
+.It Xo
+.Nm zfs
+.Cm release
+.Op Fl r
+.Ar tag Ar snapshot Ns …
+.Xc
+Removes a single reference, named with the
+.Ar tag
+argument, from the specified snapshot or snapshots.
+The tag must already exist for each snapshot.
+If a hold exists on a snapshot, attempts to destroy that snapshot by using the
+.Nm zfs Cm destroy
+command return
+.Sy EBUSY .
+.Bl -tag -width "-r"
+.It Fl r
+Recursively releases a hold with the given tag on the snapshots of all
+descendent file systems.
+.El
+.El
+.
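Holds are often taken for the duration of a job such as replication. This sketch is illustrative only: the `with_hold` wrapper, the `backup_job` tag, and the snapshot names are assumptions, not part of ZFS. It guarantees the snapshot cannot be destroyed while the job runs, and releases the hold even if the job fails:

```shell
# Illustrative helper, not part of ZFS: run a command while holding a
# snapshot, then always release the hold and preserve the command's
# exit status.
# Usage: with_hold <snapshot> <command> [args...]
ZFS=${ZFS:-zfs}
with_hold() {
    snap=$1; shift
    "$ZFS" hold backup_job "$snap" || return 1
    "$@"; rc=$?
    "$ZFS" release backup_job "$snap"
    return $rc
}
```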
+.Sh SEE ALSO +.Xr zfs-destroy 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-inherit.8 b/sys/contrib/openzfs/man/man8/zfs-inherit.8 new file mode 120000 index 000000000000..c70b41ae4064 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-inherit.8 @@ -0,0 +1 @@ +zfs-set.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-jail.8 b/sys/contrib/openzfs/man/man8/zfs-jail.8 new file mode 100644 index 000000000000..569f5f57eab4 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-jail.8 @@ -0,0 +1,125 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2011, Pawel Jakub Dawidek <pjd@FreeBSD.org> +.\" Copyright (c) 2012, Glen Barber <gjb@FreeBSD.org> +.\" Copyright (c) 2012, Bryan Drewery <bdrewery@FreeBSD.org> +.\" Copyright (c) 2013, Steven Hartland <smh@FreeBSD.org> +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright (c) 2014, Xin LI <delphij@FreeBSD.org> +.\" Copyright (c) 2014-2015, The FreeBSD Foundation, All Rights Reserved. +.\" Copyright (c) 2016 Nexenta Systems, Inc. 
All Rights Reserved. +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-JAIL 8 +.Os +. +.Sh NAME +.Nm zfs-jail +.Nd attach or detach ZFS filesystem from FreeBSD jail +.Sh SYNOPSIS +.Nm zfs Cm jail +.Ar jailid Ns | Ns Ar jailname +.Ar filesystem +.Nm zfs Cm unjail +.Ar jailid Ns | Ns Ar jailname +.Ar filesystem +. +.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm jail +.Ar jailid Ns | Ns Ar jailname +.Ar filesystem +.Xc +Attach the specified +.Ar filesystem +to the jail identified by JID +.Ar jailid +or name +.Ar jailname . +From now on this file system tree can be managed from within a jail if the +.Sy jailed +property has been set. +To use this functionality, the jail needs the +.Sy allow.mount +and +.Sy allow.mount.zfs +parameters set to +.Sy 1 +and the +.Sy enforce_statfs +parameter set to a value lower than +.Sy 2 . +.Pp +You cannot attach a jailed dataset's children to another jail. +You can also not attach the root file system +of the jail or any dataset which needs to be mounted before the zfs rc script +is run inside the jail, as it would be attached unmounted until it is +mounted from the rc script inside the jail. +.Pp +To allow management of the dataset from within a jail, the +.Sy jailed +property has to be set and the jail needs access to the +.Pa /dev/zfs +device. +The +.Sy quota +property cannot be changed from within a jail. +.Pp +After a dataset is attached to a jail and the +.Sy jailed +property is set, a jailed file system cannot be mounted outside the jail, +since the jail administrator might have set the mount point to an unacceptable +value. +.Pp +See +.Xr jail 8 +for more information on managing jails. +Jails are a +.Fx +feature and are not relevant on other platforms. 
+.It Xo +.Nm zfs +.Cm unjail +.Ar jailid Ns | Ns Ar jailname +.Ar filesystem +.Xc +Detaches the specified +.Ar filesystem +from the jail identified by JID +.Ar jailid +or name +.Ar jailname . +.El +.Sh SEE ALSO +.Xr zfsprops 7 , +.Xr jail 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-list.8 b/sys/contrib/openzfs/man/man8/zfs-list.8 new file mode 100644 index 000000000000..42eff94f9762 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-list.8 @@ -0,0 +1,363 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd August 25, 2025 +.Dt ZFS-LIST 8 +.Os +. 
+.Sh NAME +.Nm zfs-list +.Nd list properties of ZFS datasets +.Sh SYNOPSIS +.Nm zfs +.Cm list +.Op Fl r Ns | Ns Fl d Ar depth +.Op Fl Hp +.Op Fl j Op Ar --json-int +.Oo Fl o Ar property Ns Oo , Ns Ar property Oc Ns … Oc +.Oo Fl s Ar property Oc Ns … +.Oo Fl S Ar property Oc Ns … +.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc +.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Oc Ns … +. +.Sh DESCRIPTION +By default, all file systems and volumes are displayed, with the following +fields: +.Sy name , Sy used , Sy available , Sy referenced , Sy mountpoint . +Snapshots are displayed if the +.Sy listsnapshots +pool property is +.Sy on +.Po the default is +.Sy off +.Pc +or if the +.Fl t Sy snapshot +or +.Fl t Sy all +options are specified. +.Bl -tag -width "-H" +.It Fl H +Used for scripting mode. +Do not print headers, and separate fields by a single tab instead of arbitrary +white space. +.It Fl j , -json Op Ar --json-int +Print the output in JSON format. +Specify +.Sy --json-int +to print the numbers in integer format instead of strings in JSON output. +.It Fl d Ar depth +Recursively display any children of the dataset, limiting the recursion to +.Ar depth . +A +.Ar depth +of +.Sy 1 +will display only the dataset and its direct children. +.It Fl o Ar property +A comma-separated list of properties to display. +Each property must be: +.Bl -bullet -compact +.It +One of the properties described in the +.Sx Native Properties +section of +.Xr zfsprops 7 +.It +A user property +.It +The value +.Sy name +to display the dataset name +.It +The value +.Sy space +to display space usage properties on file systems and volumes. +This is a shortcut for specifying +.Fl o Ns \ \& Ns Sy name , Ns Sy avail , Ns Sy used , Ns Sy usedsnap , Ns +.Sy usedds , Ns Sy usedrefreserv , Ns Sy usedchild +.Fl t Sy filesystem , Ns Sy volume . +.El +.It Fl p +Display numbers in parsable +.Pq exact +values. +.It Fl r +Recursively display any children of the dataset on the command line. 
+.It Fl s Ar property +A property for sorting the output by column in ascending order based on the +value of the property. +The property must be one of the properties described in the +.Sx Properties +section of +.Xr zfsprops 7 +or the value +.Sy name +to sort by the dataset name. +Multiple properties can be specified to operate together using multiple +.Fl s +or +.Fl S +options. +Multiple +.Fl s +and +.Fl S +options are evaluated from left to right to supply sort keys in +decreasing order of priority. +Property types operate as follows: +.Bl -bullet -compact +.It +Numeric types sort in numeric order. +.It +String types sort in alphabetical order. +.It +Types inappropriate for a row sort that row to the literal bottom, +regardless of the specified ordering. +.El +.Pp +If no sort columns are specified, or if two lines of output would sort +equally across all specified columns, then datasets and bookmarks are +sorted by name, whereas snapshots are sorted first by the name of their +dataset and then by the time of their creation. +When no sort columns are specified but snapshots are listed, this +default behavior causes snapshots to be grouped under their datasets in +chronological order by creation time. +.It Fl S Ar property +Same as +.Fl s , +but sorts by +.Ar property +in descending order. +.It Fl t Ar type +A comma-separated list of types to display, where +.Ar type +is one of +.Sy filesystem , +.Sy snapshot , +.Sy volume , +.Sy bookmark , +or +.Sy all . +For example, specifying +.Fl t Sy snapshot +displays only snapshots. +.Sy fs , +.Sy snap , +or +.Sy vol +can be used as aliases for +.Sy filesystem , +.Sy snapshot , +or +.Sy volume . +.El +. +.Sh EXAMPLES +.\" These are, respectively, examples 5 from zfs.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Listing ZFS Datasets +The following command lists all active file systems and volumes in the system. +Snapshots are displayed if +.Sy listsnaps Ns = Ns Sy on . +The default is +.Sy off . 
+See +.Xr zpoolprops 7 +for more information on pool properties. +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm list +NAME USED AVAIL REFER MOUNTPOINT +pool 450K 457G 18K /pool +pool/home 315K 457G 21K /export/home +pool/home/anne 18K 457G 18K /export/home/anne +pool/home/bob 276K 457G 276K /export/home/bob +.Ed +.Ss Example 2 : No Listing ZFS filesystems and snapshots in JSON format +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm list Fl j Fl t Ar filesystem,snapshot | Cm jq +{ + "output_version": { + "command": "zfs list", + "vers_major": 0, + "vers_minor": 1 + }, + "datasets": { + "pool": { + "name": "pool", + "type": "FILESYSTEM", + "pool": "pool", + "properties": { + "used": { + "value": "290K", + "source": { + "type": "NONE", + "data": "-" + } + }, + "available": { + "value": "30.5G", + "source": { + "type": "NONE", + "data": "-" + } + }, + "referenced": { + "value": "24K", + "source": { + "type": "NONE", + "data": "-" + } + }, + "mountpoint": { + "value": "/pool", + "source": { + "type": "DEFAULT", + "data": "-" + } + } + } + }, + "pool/home": { + "name": "pool/home", + "type": "FILESYSTEM", + "pool": "pool", + "properties": { + "used": { + "value": "48K", + "source": { + "type": "NONE", + "data": "-" + } + }, + "available": { + "value": "30.5G", + "source": { + "type": "NONE", + "data": "-" + } + }, + "referenced": { + "value": "24K", + "source": { + "type": "NONE", + "data": "-" + } + }, + "mountpoint": { + "value": "/mnt/home", + "source": { + "type": "LOCAL", + "data": "-" + } + } + } + }, + "pool/home/bob": { + "name": "pool/home/bob", + "type": "FILESYSTEM", + "pool": "pool", + "properties": { + "used": { + "value": "24K", + "source": { + "type": "NONE", + "data": "-" + } + }, + "available": { + "value": "30.5G", + "source": { + "type": "NONE", + "data": "-" + } + }, + "referenced": { + "value": "24K", + "source": { + "type": "NONE", + "data": "-" + } + }, + "mountpoint": { + "value": "/mnt/home/bob", + "source": { + "type": "INHERITED", + 
"data": "pool/home" + } + } + } + }, + "pool/home/bob@v1": { + "name": "pool/home/bob@v1", + "type": "SNAPSHOT", + "pool": "pool", + "dataset": "pool/home/bob", + "snapshot_name": "v1", + "properties": { + "used": { + "value": "0B", + "source": { + "type": "NONE", + "data": "-" + } + }, + "available": { + "value": "-", + "source": { + "type": "NONE", + "data": "-" + } + }, + "referenced": { + "value": "24K", + "source": { + "type": "NONE", + "data": "-" + } + }, + "mountpoint": { + "value": "-", + "source": { + "type": "NONE", + "data": "-" + } + } + } + } + } +} +.Ed +. +.Sh SEE ALSO +.Xr zfsprops 7 , +.Xr zfs-get 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-load-key.8 b/sys/contrib/openzfs/man/man8/zfs-load-key.8 new file mode 100644 index 000000000000..3a11cea99fd6 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-load-key.8 @@ -0,0 +1,305 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. 
All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-LOAD-KEY 8 +.Os +. +.Sh NAME +.Nm zfs-load-key +.Nd load, unload, or change encryption key of ZFS dataset +.Sh SYNOPSIS +.Nm zfs +.Cm load-key +.Op Fl nr +.Op Fl L Ar keylocation +.Fl a Ns | Ns Ar filesystem +.Nm zfs +.Cm unload-key +.Op Fl r +.Fl a Ns | Ns Ar filesystem +.Nm zfs +.Cm change-key +.Op Fl l +.Op Fl o Ar keylocation Ns = Ns Ar value +.Op Fl o Ar keyformat Ns = Ns Ar value +.Op Fl o Ar pbkdf2iters Ns = Ns Ar value +.Ar filesystem +.Nm zfs +.Cm change-key +.Fl i +.Op Fl l +.Ar filesystem +. +.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm load-key +.Op Fl nr +.Op Fl L Ar keylocation +.Fl a Ns | Ns Ar filesystem +.Xc +Load the key for +.Ar filesystem , +allowing it and all children that inherit the +.Sy keylocation +property to be accessed. +The key will be expected in the format specified by the +.Sy keyformat +and location specified by the +.Sy keylocation +property. +Note that if the +.Sy keylocation +is set to +.Sy prompt +the terminal will interactively wait for the key to be entered. +Loading a key will not automatically mount the dataset. +If that functionality is desired, +.Nm zfs Cm mount Fl l +will ask for the key and mount the dataset +.Po +see +.Xr zfs-mount 8 +.Pc . +Once the key is loaded the +.Sy keystatus +property will become +.Sy available . +.Bl -tag -width "-r" +.It Fl r +Recursively loads the keys for the specified filesystem and all descendent +encryption roots. +.It Fl a +Loads the keys for all encryption roots in all imported pools. +.It Fl n +Do a dry-run +.Pq Qq No-op +.Cm load-key . +This will cause +.Nm zfs +to simply check that the provided key is correct. 
+This command may be run even if the key is already loaded. +.It Fl L Ar keylocation +Use +.Ar keylocation +instead of the +.Sy keylocation +property. +This will not change the value of the property on the dataset. +Note that if used with either +.Fl r +or +.Fl a , +.Ar keylocation +may only be given as +.Sy prompt . +.El +.It Xo +.Nm zfs +.Cm unload-key +.Op Fl r +.Fl a Ns | Ns Ar filesystem +.Xc +Unloads a key from ZFS, removing the ability to access the dataset and all of +its children that inherit the +.Sy keylocation +property. +This requires that the dataset is not currently open or mounted. +Once the key is unloaded the +.Sy keystatus +property will become +.Sy unavailable . +.Bl -tag -width "-r" +.It Fl r +Recursively unloads the keys for the specified filesystem and all descendent +encryption roots. +.It Fl a +Unloads the keys for all encryption roots in all imported pools. +.El +.It Xo +.Nm zfs +.Cm change-key +.Op Fl l +.Op Fl o Ar keylocation Ns = Ns Ar value +.Op Fl o Ar keyformat Ns = Ns Ar value +.Op Fl o Ar pbkdf2iters Ns = Ns Ar value +.Ar filesystem +.Xc +.It Xo +.Nm zfs +.Cm change-key +.Fl i +.Op Fl l +.Ar filesystem +.Xc +Changes the user's key (e.g. a passphrase) used to access a dataset. +This command requires that the existing key for the dataset is already loaded. +This command may also be used to change the +.Sy keylocation , +.Sy keyformat , +and +.Sy pbkdf2iters +properties as needed. +If the dataset was not previously an encryption root it will become one. +Alternatively, the +.Fl i +flag may be provided to cause an encryption root to inherit the parent's key +instead. +.Pp +If the user's key is compromised, +.Nm zfs Cm change-key +does not necessarily protect existing or newly-written data from attack. +Newly-written data will continue to be encrypted with the same master key as +the existing data. +The master key is compromised if an attacker obtains a +user key and the corresponding wrapped master key. 
+Currently, +.Nm zfs Cm change-key +does not overwrite the previous wrapped master key on disk, so it is +accessible via forensic analysis for an indeterminate length of time. +.Pp +In the event of a master key compromise, ideally the drives should be securely +erased to remove all the old data (which is readable using the compromised +master key), a new pool created, and the data copied back. +This can be approximated in place by creating new datasets, copying the data +.Pq e.g. using Nm zfs Cm send | Nm zfs Cm recv , +and then clearing the free space with +.Nm zpool Cm trim Fl -secure +if supported by your hardware, otherwise +.Nm zpool Cm initialize . +.Bl -tag -width "-r" +.It Fl l +Ensures the key is loaded before attempting to change the key. +This is effectively equivalent to running +.Nm zfs Cm load-key Ar filesystem ; Nm zfs Cm change-key Ar filesystem +.It Fl o Ar property Ns = Ns Ar value +Allows the user to set encryption key properties +.Pq Sy keyformat , keylocation , No and Sy pbkdf2iters +while changing the key. +This is the only way to alter +.Sy keyformat +and +.Sy pbkdf2iters +after the dataset has been created. +.It Fl i +Indicates that zfs should make +.Ar filesystem +inherit the key of its parent. +Note that this command can only be run on an encryption root +that has an encrypted parent. +.El +.El +.Ss Encryption +Enabling the +.Sy encryption +feature allows for the creation of encrypted filesystems and volumes. +ZFS will encrypt file and volume data, file attributes, ACLs, permission bits, +directory listings, FUID mappings, and +.Sy userused Ns / Ns Sy groupused +data. +ZFS will not encrypt metadata related to the pool structure, including +dataset and snapshot names, dataset hierarchy, properties, file size, file +holes, and deduplication tables (though the deduplicated data itself is +encrypted). +.Pp +Key rotation is managed by ZFS. +Changing the user's key (e.g. a passphrase) +does not require re-encrypting the entire dataset. 
+Datasets can be scrubbed, +resilvered, renamed, and deleted without the encryption keys being loaded (see +the +.Cm load-key +subcommand for more info on key loading). +.Pp +Creating an encrypted dataset requires specifying the +.Sy encryption No and Sy keyformat +properties at creation time, along with an optional +.Sy keylocation No and Sy pbkdf2iters . +After entering an encryption key, the +created dataset will become an encryption root. +Any descendant datasets will +inherit their encryption key from the encryption root by default, meaning that +loading, unloading, or changing the key for the encryption root will implicitly +do the same for all inheriting datasets. +If this inheritance is not desired, simply supply a +.Sy keyformat +when creating the child dataset or use +.Nm zfs Cm change-key +to break an existing relationship, creating a new encryption root on the child. +Note that the child's +.Sy keyformat +may match that of the parent while still creating a new encryption root, and +that changing the +.Sy encryption +property alone does not create a new encryption root; this would simply use a +different cipher suite with the same key as its encryption root. +The one exception is that clones will always use their origin's encryption key. +As a result of this exception, some encryption-related properties +.Pq namely Sy keystatus , keyformat , keylocation , No and Sy pbkdf2iters +do not inherit like other ZFS properties and instead use the value determined +by their encryption root. +Encryption root inheritance can be tracked via the read-only +.Sy encryptionroot +property. +.Pp +Encryption changes the behavior of a few ZFS +operations. +Encryption is applied after compression so compression ratios are preserved. +Normally checksums in ZFS are 256 bits long, but for encrypted data +the checksum is 128 bits of the user-chosen checksum and 128 bits of MAC from +the encryption suite, which provides additional protection against maliciously +altered data. 
+Deduplication is still possible with encryption enabled but for security, +datasets will only deduplicate against themselves, their snapshots, +and their clones. +.Pp +There are a few limitations on encrypted datasets. +Encrypted data cannot be embedded via the +.Sy embedded_data +feature. +Encrypted datasets may not have +.Sy copies Ns = Ns Em 3 +since the implementation stores some encryption metadata where the third copy +would normally be. +Since compression is applied before encryption, datasets may +be vulnerable to a CRIME-like attack if applications accessing the data allow +for it. +Deduplication with encryption will leak information about which blocks +are equivalent in a dataset and will incur an extra CPU cost for each block +written. +. +.Sh SEE ALSO +.Xr zfsprops 7 , +.Xr zfs-create 8 , +.Xr zfs-set 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-mount-generator.8.in b/sys/contrib/openzfs/man/man8/zfs-mount-generator.8.in new file mode 100644 index 000000000000..9e44ea30c636 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-mount-generator.8.in @@ -0,0 +1,183 @@ +.\" SPDX-License-Identifier: MIT +.\" +.\" Copyright 2018 Antonio Russo <antonio.e.russo@gmail.com> +.\" Copyright 2019 Kjeld Schouten-Lebbing <kjeld@schouten-lebbing.nl> +.\" Copyright 2020 InsanePrawn <insane.prawny@gmail.com> +.\" +.\" Permission is hereby granted, free of charge, to any person obtaining +.\" a copy of this software and associated documentation files (the +.\" "Software"), to deal in the Software without restriction, including +.\" without limitation the rights to use, copy, modify, merge, publish, +.\" distribute, sublicense, and/or sell copies of the Software, and to +.\" permit persons to whom the Software is furnished to do so, subject to +.\" the following conditions: +.\" +.\" The above copyright notice and this permission notice shall be +.\" included in all copies or substantial portions of the Software. 
+.\" +.\" THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +.\" EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +.\" MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +.\" NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +.\" LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +.\" OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +.\" WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. +.\" +.Dd November 30, 2021 +.Dt ZFS-MOUNT-GENERATOR 8 +.Os +. +.Sh NAME +.Nm zfs-mount-generator +.Nd generate systemd mount units for ZFS filesystems +.Sh SYNOPSIS +.Pa @systemdgeneratordir@/zfs-mount-generator +. +.Sh DESCRIPTION +.Nm +is a +.Xr systemd.generator 7 +that generates native +.Xr systemd.mount 5 +units for configured ZFS datasets. +. +.Ss Properties +.Bl -tag -compact -width "org.openzfs.systemd:required-by=unit[ unit]…" +.It Sy mountpoint Ns = +.No Skipped if Sy legacy No or Sy none . +. +.It Sy canmount Ns = +.No Skipped if Sy off . +.No Skipped if only Sy noauto +datasets exist for a given mountpoint and there's more than one. +.No Datasets with Sy yes No take precedence over ones with Sy noauto No for the same mountpoint . +.No Sets logical Em noauto No flag if Sy noauto . +Encryption roots always generate +.Sy zfs-load-key@ Ns Ar root Ns Sy .service , +even if +.Sy off . +. +.It Sy atime Ns = , Sy relatime Ns = , Sy devices Ns = , Sy exec Ns = , Sy readonly Ns = , Sy setuid Ns = , Sy nbmand Ns = +Used to generate mount options equivalent to +.Nm zfs Cm mount . +. 
+.It Sy encroot Ns = , Sy keylocation Ns = +If the dataset is an encryption root, its mount unit will bind to +.Sy zfs-load-key@ Ns Ar root Ns Sy .service , +with additional dependencies as follows: +.Bl -tag -compact -offset Ds -width "keylocation=https://URL (et al.)" +.It Sy keylocation Ns = Ns Sy prompt +None, uses +.Xr systemd-ask-password 1 +.It Sy keylocation Ns = Ns Sy https:// Ns Ar URL Pq et al.\& +.Sy Wants Ns = , Sy After Ns = : Pa network-online.target +.It Sy keylocation Ns = Ns Sy file:// Ns < Ns Ar path Ns > +.Sy RequiresMountsFor Ns = Ns Ar path +.El +. +The service also uses the same +.Sy Wants Ns = , +.Sy After Ns = , +.Sy Requires Ns = , No and +.Sy RequiresMountsFor Ns = , +as the mount unit. +. +.It Sy org.openzfs.systemd:requires Ns = Ns Pa path Ns Oo " " Ns Pa path Oc Ns … +.No Sets Sy Requires Ns = for the mount- and key-loading unit. +. +.It Sy org.openzfs.systemd:requires-mounts-for Ns = Ns Pa path Ns Oo " " Ns Pa path Oc Ns … +.No Sets Sy RequiresMountsFor Ns = for the mount- and key-loading unit. +. +.It Sy org.openzfs.systemd:before Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns … +.No Sets Sy Before Ns = for the mount unit. +. +.It Sy org.openzfs.systemd:after Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns … +.No Sets Sy After Ns = for the mount unit. +. +.It Sy org.openzfs.systemd:wanted-by Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns … +.No Sets logical Em noauto No flag (see below) . +.No If not Sy none , No sets Sy WantedBy Ns = for the mount unit. +.It Sy org.openzfs.systemd:required-by Ns = Ns Pa unit Ns Oo " " Ns Pa unit Oc Ns … +.No Sets logical Em noauto No flag (see below) . +.No If not Sy none , No sets Sy RequiredBy Ns = for the mount unit. +. +.It Sy org.openzfs.systemd:nofail Ns = Ns (unset) Ns | Ns Sy on Ns | Ns Sy off +Waxes or wanes strength of default reverse dependencies of the mount unit, see +below. +. +.It Sy org.openzfs.systemd:ignore Ns = Ns Sy on Ns | Ns Sy off +.No Skip if Sy on . +.No Defaults to Sy off . +.El +. 
+.Ss Unit Ordering And Dependencies +Additionally, unless the pool the dataset resides on +is imported at generation time, both units gain +.Sy Wants Ns = Ns Pa zfs-import.target +and +.Sy After Ns = Ns Pa zfs-import.target . +.Pp +Additionally, unless the logical +.Em noauto +flag is set, the mount unit gains a reverse-dependency for +.Pa local-fs.target +of strength +.Bl -tag -compact -offset Ds -width "(unset)" +.It (unset) +.Sy WantedBy Ns = No + Sy Before Ns = +.It Sy on +.Sy WantedBy Ns = +.It Sy off +.Sy RequiredBy Ns = No + Sy Before Ns = +.El +. +.Ss Cache File +Because ZFS pools may not be available very early in the boot process, +information on ZFS mountpoints must be stored separately. +The output of +.Dl Nm zfs Cm list Fl Ho Ar name , Ns Aq every property above in order +for datasets that should be mounted by systemd should be kept at +.Pa @sysconfdir@/zfs/zfs-list.cache/ Ns Ar poolname , +and, if writeable, will be kept synchronized for the entire pool by the +.Pa history_event-zfs-list-cacher.sh +ZEDLET, if enabled +.Pq see Xr zed 8 . +. +.Sh ENVIRONMENT +If the +.Sy ZFS_DEBUG +environment variable is nonzero +.Pq or unset and Pa /proc/cmdline No contains Qq Sy debug , +print summary accounting information at the end. +. 
+.Sh EXAMPLES +To begin, enable tracking for the pool: +.Dl # Nm touch Pa @sysconfdir@/zfs/zfs-list.cache/ Ns Ar poolname +Then enable the tracking ZEDLET: +.Dl # Nm ln Fl s Pa @zfsexecdir@/zed.d/history_event-zfs-list-cacher.sh @sysconfdir@/zfs/zed.d +.Dl # Nm systemctl Cm enable Pa zfs-zed.service +.Dl # Nm systemctl Cm restart Pa zfs-zed.service +.Pp +If no history event is in the queue, +inject one to ensure the ZEDLET runs to refresh the cache file +by setting a monitored property somewhere on the pool: +.Dl # Nm zfs Cm set Sy relatime Ns = Ns Sy off Ar poolname/dset +.Dl # Nm zfs Cm inherit Sy relatime Ar poolname/dset +.Pp +To test the generator output: +.Dl $ Nm mkdir Pa /tmp/zfs-mount-generator +.Dl $ Nm @systemdgeneratordir@/zfs-mount-generator Pa /tmp/zfs-mount-generator +. +If the generated units are satisfactory, instruct +.Nm systemd +to re-run all generators: +.Dl # Nm systemctl daemon-reload +. +.Sh SEE ALSO +.Xr systemd.mount 5 , +.Xr systemd.target 5 , +.Xr zfs 5 , +.Xr systemd.generator 7 , +.Xr systemd.special 7 , +.Xr zed 8 , +.Xr zpool-events 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-mount.8 b/sys/contrib/openzfs/man/man8/zfs-mount.8 new file mode 100644 index 000000000000..2689b6dc345b --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-mount.8 @@ -0,0 +1,140 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
+.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd October 12, 2024 +.Dt ZFS-MOUNT 8 +.Os +. +.Sh NAME +.Nm zfs-mount +.Nd manage mount state of ZFS filesystems +.Sh SYNOPSIS +.Nm zfs +.Cm mount +.Op Fl j +.Nm zfs +.Cm mount +.Op Fl Oflv +.Op Fl o Ar options +.Fl a Ns | Ns Fl R Ar filesystem Ns | Ns Ar filesystem +.Nm zfs +.Cm unmount +.Op Fl fu +.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint +. +.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm mount +.Op Fl j +.Xc +Displays all ZFS file systems currently mounted. +.Bl -tag -width "-j" +.It Fl j , -json +Displays all mounted file systems in JSON format. +.El +.It Xo +.Nm zfs +.Cm mount +.Op Fl Oflv +.Op Fl o Ar options +.Fl a Ns | Ns Fl R Ar filesystem Ns | Ns Ar filesystem +.Xc +Mount ZFS filesystem on a path described by its +.Sy mountpoint +property, if the path exists and is empty. +If +.Sy mountpoint +is set to +.Em legacy , +the filesystem should be instead mounted using +.Xr mount 8 . +.Bl -tag -width "-O" +.It Fl O +Perform an overlay mount. +Allows mounting in non-empty +.Sy mountpoint . +See +.Xr mount 8 +for more information. +.It Fl a +Mount all available ZFS file systems. +Invoked automatically as part of the boot process if configured. 
+.It Fl R +Mount the specified filesystems along with all their children. +.It Ar filesystem +Mount the specified filesystem. +.It Fl o Ar options +An optional, comma-separated list of mount options to use temporarily for the +duration of the mount. +See the +.Em Temporary Mount Point Properties +section of +.Xr zfsprops 7 +for details. +.It Fl l +Load keys for encrypted filesystems as they are being mounted. +This is equivalent to executing +.Nm zfs Cm load-key +on each encryption root before mounting it. +Note that if a filesystem has +.Sy keylocation Ns = Ns Sy prompt , +this will cause the terminal to interactively block after asking for the key. +.It Fl v +Report mount progress. +.It Fl f +Attempt to force mounting of all filesystems, even those that couldn't normally +be mounted (e.g. redacted datasets). +.El +.It Xo +.Nm zfs +.Cm unmount +.Op Fl fu +.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint +.Xc +Unmounts currently mounted ZFS file systems. +.Bl -tag -width "-a" +.It Fl a +Unmount all available ZFS file systems. +Invoked automatically as part of the shutdown process. +.It Fl f +Forcefully unmount the file system, even if it is currently in use. +This option is not supported on Linux. +.It Fl u +Unload keys for any encryption roots unmounted by this command. +.It Ar filesystem Ns | Ns Ar mountpoint +Unmount the specified filesystem. +The command can also be given a path to a ZFS file system mount point on the +system. +.El +.El diff --git a/sys/contrib/openzfs/man/man8/zfs-program.8 b/sys/contrib/openzfs/man/man8/zfs-program.8 new file mode 100644 index 000000000000..d87042c4c0c7 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-program.8 @@ -0,0 +1,673 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" This file and its contents are supplied under the terms of the +.\" Common Development and Distribution License ("CDDL"), version 1.0. +.\" You may only use this file in accordance with the terms of version +.\" 1.0 of the CDDL. 
+.\" +.\" A full copy of the text of the CDDL should have accompanied this +.\" source. A copy of the CDDL is also available via the Internet at +.\" http://www.illumos.org/license/CDDL. +.\" +.\" Copyright (c) 2016, 2019 by Delphix. All Rights Reserved. +.\" Copyright (c) 2019, 2020 by Christian Schwarz. All Rights Reserved. +.\" Copyright 2020 Joyent, Inc. +.\" Copyright (c) 2025, Rob Norris <robn@despairlabs.com> +.\" +.Dd June 5, 2025 +.Dt ZFS-PROGRAM 8 +.Os +. +.Sh NAME +.Nm zfs-program +.Nd execute ZFS channel programs +.Sh SYNOPSIS +.Nm zfs +.Cm program +.Op Fl jn +.Op Fl t Ar instruction-limit +.Op Fl m Ar memory-limit +.Ar pool +.Ar script +.Op Ar script arguments +. +.Sh DESCRIPTION +The ZFS channel program interface allows ZFS administrative operations to be +run programmatically as a Lua script. +The entire script is executed atomically, with no other administrative +operations taking effect concurrently. +A library of ZFS calls is made available to channel program scripts. +Channel programs may only be run with root privileges. +.Pp +A modified version of the Lua 5.2 interpreter is used to run channel program +scripts. +The Lua 5.2 manual can be found at +.Lk http://www.lua.org/manual/5.2/ +.Pp +The channel program given by +.Ar script +will be run on +.Ar pool , +and any attempts to access or modify other pools will cause an error. +. +.Sh OPTIONS +.Bl -tag -width "-t" +.It Fl j , -json +Display channel program output in JSON format. +When this flag is specified and standard output is empty - +channel program encountered an error. +The details of such an error will be printed to standard error in plain text. +.It Fl n +Executes a read-only channel program, which runs faster. +The program cannot change on-disk state by calling functions from the +zfs.sync submodule. +The program can be used to gather information such as properties and +determining if changes would succeed (zfs.check.*). 
+Without this flag, all pending changes must be synced to disk before a +channel program can complete. +.It Fl t Ar instruction-limit +Limit the number of Lua instructions to execute. +If a channel program executes more than the specified number of instructions, +it will be stopped and an error will be returned. +The default limit is 10 million instructions, and it can be set to a maximum of +100 million instructions. +.It Fl m Ar memory-limit +Memory limit, in bytes. +If a channel program attempts to allocate more memory than the given limit, it +will be stopped and an error returned. +The default memory limit is 10 MiB, and can be set to a maximum of 100 MiB. +.El +.Pp +All remaining argument strings will be passed directly to the Lua script as +described in the +.Sx LUA INTERFACE +section below. +. +.Sh LUA INTERFACE +A channel program can be invoked either from the command line, or via a library +call to +.Fn lzc_channel_program . +. +.Ss Arguments +Arguments passed to the channel program are converted to a Lua table. +If invoked from the command line, extra arguments to the Lua script will be +accessible as an array stored in the argument table with the key 'argv': +.Bd -literal -compact -offset indent +args = ... +argv = args["argv"] +-- argv == {1="arg1", 2="arg2", ...} +.Ed +.Pp +If invoked from the libzfs interface, an arbitrary argument list can be +passed to the channel program, which is accessible via the same +.Qq Li ... +syntax in Lua: +.Bd -literal -compact -offset indent +args = ... +-- args == {"foo"="bar", "baz"={...}, ...} +.Ed +.Pp +Note that because Lua arrays are 1-indexed, arrays passed to Lua from the +libzfs interface will have their indices incremented by 1. +That is, the element +in +.Va arr[0] +in a C array passed to a channel program will be stored in +.Va arr[1] +when accessed from Lua. +. +.Ss Return Values +Lua return statements take the form: +.Dl return ret0, ret1, ret2, ... 
+.Pp +Return statements returning multiple values are permitted internally in a +channel program script, but attempting to return more than one value from the +top level of the channel program is not permitted and will throw an error. +However, tables containing multiple values can still be returned. +If invoked from the command line, a return statement: +.Bd -literal -compact -offset indent +a = {foo="bar", baz=2} +return a +.Ed +.Pp +Will be output formatted as: +.Bd -literal -compact -offset indent +Channel program fully executed with return value: + return: + baz: 2 + foo: 'bar' +.Ed +. +.Ss Fatal Errors +If the channel program encounters a fatal error while running, a non-zero exit +status will be returned. +If more information about the error is available, a singleton list will be +returned detailing the error: +.Dl error: \&"error string, including Lua stack trace" +.Pp +If a fatal error is returned, the channel program may have not executed at all, +may have partially executed, or may have fully executed but failed to pass a +return value back to userland. +.Pp +If the channel program exhausts an instruction or memory limit, a fatal error +will be generated and the program will be stopped, leaving the program partially +executed. +No attempt is made to reverse or undo any operations already performed. +Note that because both the instruction count and amount of memory used by a +channel program are deterministic when run against the same inputs and +filesystem state, as long as a channel program has run successfully once, you +can guarantee that it will finish successfully against a similar size system. +.Pp +If a channel program attempts to return too large a value, the program will +fully execute but exit with a nonzero status code and no return value. +.Pp +.Em Note : +ZFS API functions do not generate Fatal Errors when correctly invoked, they +return an error code and the channel program continues executing. 
+See the
+.Sx ZFS API
+section below for function-specific details on error return codes.
+.
+.Ss Lua to C Value Conversion
+When invoking a channel program via the libzfs interface, it is necessary to
+translate arguments and return values from Lua values to their C equivalents,
+and vice-versa.
+.Pp
+There is a correspondence between nvlist values in C and Lua tables.
+A Lua table which is returned from the channel program will be recursively
+converted to an nvlist, with table values converted to their natural
+equivalents:
+.TS
+cw3 l c l .
+	string	->	string
+	number	->	int64
+	boolean	->	boolean_value
+	nil	->	boolean (no value)
+	table	->	nvlist
+.TE
+.Pp
+Likewise, table keys are replaced by string equivalents as follows:
+.TS
+cw3 l c l .
+	string	->	no change
+	number	->	signed decimal string ("%lld")
+	boolean	->	"true" | "false"
+.TE
+.Pp
+Any collision of table key strings (for example, the string "true" and a
+true boolean value) will cause a fatal error.
+.Pp
+Lua numbers are represented internally as signed 64-bit integers.
+.
+.Sh LUA STANDARD LIBRARY
+The following Lua built-in base library functions are available:
+.TS
+cw3 l l l l .
+	assert	rawlen	collectgarbage	rawget
+	error	rawset	getmetatable	select
+	ipairs	setmetatable	next	tonumber
+	pairs	tostring	rawequal	type
+.TE
+.Pp
+All functions in the
+.Em coroutine ,
+.Em string ,
+and
+.Em table
+built-in submodules are also available.
+A complete list and documentation of these modules is available in the Lua
+manual.
+.Pp
+The following base library functions have been disabled and are
+not available for use in channel programs:
+.TS
+cw3 l l l l l l .
+	dofile	loadfile	load	pcall	print	xpcall
+.TE
+.
+.Sh ZFS API
+.
+.Ss Function Arguments
+Each API function takes a fixed set of required positional arguments and
+optional keyword arguments.
+For example, the destroy function takes a single positional string argument
+(the name of the dataset to destroy) and an optional "defer" keyword boolean
+argument.
+When using parentheses to specify the arguments to a Lua function, only
+positional arguments can be used:
+.Dl Sy zfs.sync.destroy Ns Pq \&"rpool@snap"
+.Pp
+To use keyword arguments, functions must be called with a single argument that
+is a Lua table containing entries mapping integers to positional arguments and
+strings to keyword arguments:
+.Dl Sy zfs.sync.destroy Ns Pq {1="rpool@snap", defer=true}
+.Pp
+The Lua language allows curly braces to be used in place of parentheses as
+syntactic sugar for this calling convention:
+.Dl Sy zfs.sync.destroy Ns {"rpool@snap", defer=true}
+.
+.Ss Function Return Values
+If an API function succeeds, it returns 0.
+If it fails, it returns an error code and the channel program continues
+executing.
+API functions do not generate Fatal Errors except in the case of an
+unrecoverable internal file system error.
+.Pp
+In addition to returning an error code, some functions also return extra
+details describing what caused the error.
+This extra description is given as a second return value, and will always be a
+Lua table, or Nil if no error details were returned.
+Different keys will exist in the error details table depending on the function
+and error case.
+Any such function may be called expecting a single return value:
+.Dl errno = Sy zfs.sync.promote Ns Pq dataset
+.Pp
+Or, the error details can be retrieved:
+.Bd -literal -compact -offset indent
+.No errno, details = Sy zfs.sync.promote Ns Pq dataset
+if (errno == EEXIST) then
+	assert(details ~= Nil)
+	list_of_conflicting_snapshots = details
+end
+.Ed
+.Pp
+The following global aliases for API function error return codes are defined
+for use in channel programs:
+.TS
+cw3 l l l l l l l .
+ EPERM ECHILD ENODEV ENOSPC ENOENT EAGAIN ENOTDIR + ESPIPE ESRCH ENOMEM EISDIR EROFS EINTR EACCES + EINVAL EMLINK EIO EFAULT ENFILE EPIPE ENXIO + ENOTBLK EMFILE EDOM E2BIG EBUSY ENOTTY ERANGE + ENOEXEC EEXIST ETXTBSY EDQUOT EBADF EXDEV EFBIG +.TE +. +.Ss API Functions +For detailed descriptions of the exact behavior of any ZFS administrative +operations, see the main +.Xr zfs 8 +manual page. +.Bl -tag -width "xx" +.It Fn zfs.debug msg +Record a debug message in the zfs_dbgmsg log. +A log of these messages can be printed via mdb's "::zfs_dbgmsg" command, or +can be monitored live by running +.Dl dtrace -n 'zfs-dbgmsg{trace(stringof(arg0))}' +.Pp +.Bl -tag -compact -width "property (string)" +.It Ar msg Pq string +Debug message to be printed. +.El +.It Fn zfs.exists dataset +Returns true if the given dataset exists, or false if it doesn't. +A fatal error will be thrown if the dataset is not in the target pool. +That is, in a channel program running on rpool, +.Sy zfs.exists Ns Pq \&"rpool/nonexistent_fs" +returns false, but +.Sy zfs.exists Ns Pq \&"somepool/fs_that_may_exist" +will error. +.Pp +.Bl -tag -compact -width "property (string)" +.It Ar dataset Pq string +Dataset to check for existence. +Must be in the target pool. +.El +.It Fn zfs.get_prop dataset property +Returns two values. +First, a string, number or table containing the property value for the given +dataset. +Second, a string containing the source of the property (i.e. the name of the +dataset in which it was set or nil if it is readonly). +Throws a Lua error if the dataset is invalid or the property doesn't exist. +Note that Lua only supports int64 number types whereas ZFS number properties +are uint64. +This means very large values (like GUIDs) may wrap around and appear negative. +.Pp +.Bl -tag -compact -width "property (string)" +.It Ar dataset Pq string +Filesystem or snapshot path to retrieve properties from. +.It Ar property Pq string +Name of property to retrieve. 
+All filesystem, snapshot and volume properties are supported except for +.Sy mounted +and +.Sy iscsioptions . +Also supports the +.Sy written@ Ns Ar snap +and +.Sy written# Ns Ar bookmark +properties and the +.Ao Sy user Ns | Ns Sy group Ac Ns Ao Sy quota Ns | Ns Sy used Ac Ns Sy @ Ns Ar id +properties, though the id must be in numeric form. +.El +.El +.Bl -tag -width "xx" +.It Sy zfs.sync submodule +The sync submodule contains functions that modify the on-disk state. +They are executed in "syncing context". +.Pp +The available sync submodule functions are as follows: +.Bl -tag -width "xx" +.It Fn zfs.sync.clone snapshot newdataset +Create a new filesystem from a snapshot. +Returns 0 if the filesystem was successfully created, +and a nonzero error code otherwise. +.Pp +Note: Due to general limitations in channel programs, a filesystem created +this way will not be mounted, regardless of the value of the +.Sy mountpoint +and +.Sy canmount +properties. +This limitation may be removed in the future, +so it is recommended that you set +.Sy mountpoint Ns = Ns Sy none +or +.Sy canmount Ns = Ns Sy off +or +.Sy noauto +to avoid surprises. +.Pp +.Bl -tag -compact -width "newbookmark (string)" +.It Ar snapshot Pq string +Name of the source snapshot to clone. +.It Ar newdataset Pq string +Name of the target dataset to create. +.El +.It Sy zfs.sync.destroy Ns Pq Ar dataset , Op Ar defer Ns = Ns Sy true Ns | Ns Sy false +Destroy the given dataset. +Returns 0 on successful destroy, or a nonzero error code if the dataset could +not be destroyed (for example, if the dataset has any active children or +clones). +.Pp +.Bl -tag -compact -width "newbookmark (string)" +.It Ar dataset Pq string +Filesystem or snapshot to be destroyed. +.It Op Ar defer Pq boolean +Valid only for destroying snapshots. +If set to true, and the snapshot has holds or clones, allows the snapshot to be +marked for deferred deletion rather than failing. 
+.El +.It Fn zfs.sync.inherit dataset property +Clears the specified property in the given dataset, causing it to be inherited +from an ancestor, or restored to the default if no ancestor property is set. +The +.Nm zfs Cm inherit Fl S +option has not been implemented. +Returns 0 on success, or a nonzero error code if the property could not be +cleared. +.Pp +.Bl -tag -compact -width "newbookmark (string)" +.It Ar dataset Pq string +Filesystem or snapshot containing the property to clear. +.It Ar property Pq string +The property to clear. +Allowed properties are the same as those for the +.Nm zfs Cm inherit +command. +.El +.It Fn zfs.sync.promote dataset +Promote the given clone to a filesystem. +Returns 0 on successful promotion, or a nonzero error code otherwise. +If EEXIST is returned, the second return value will be an array of the clone's +snapshots whose names collide with snapshots of the parent filesystem. +.Pp +.Bl -tag -compact -width "newbookmark (string)" +.It Ar dataset Pq string +Clone to be promoted. +.El +.It Fn zfs.sync.rollback filesystem +Rollback to the previous snapshot for a dataset. +Returns 0 on successful rollback, or a nonzero error code otherwise. +Rollbacks can be performed on filesystems or zvols, but not on snapshots +or mounted datasets. +EBUSY is returned in the case where the filesystem is mounted. +.Pp +.Bl -tag -compact -width "newbookmark (string)" +.It Ar filesystem Pq string +Filesystem to rollback. +.El +.It Fn zfs.sync.set_prop dataset property value +Sets the given property on a dataset. +Currently only user properties are supported. +Returns 0 if the property was set, or a nonzero error code otherwise. +.Pp +.Bl -tag -compact -width "newbookmark (string)" +.It Ar dataset Pq string +The dataset where the property will be set. +.It Ar property Pq string +The property to set. +.It Ar value Pq string +The value of the property to be set. +.El +.It Fn zfs.sync.snapshot dataset +Create a snapshot of a filesystem. 
+Returns 0 if the snapshot was successfully created, +and a nonzero error code otherwise. +.Pp +Note: Taking a snapshot will fail on any pool older than legacy version 27. +To enable taking snapshots from ZCP scripts, the pool must be upgraded. +.Pp +.Bl -tag -compact -width "newbookmark (string)" +.It Ar dataset Pq string +Name of snapshot to create. +.El +.It Fn zfs.sync.rename_snapshot dataset oldsnapname newsnapname +Rename a snapshot of a filesystem or a volume. +Returns 0 if the snapshot was successfully renamed, +and a nonzero error code otherwise. +.Pp +.Bl -tag -compact -width "newbookmark (string)" +.It Ar dataset Pq string +Name of the snapshot's parent dataset. +.It Ar oldsnapname Pq string +Original name of the snapshot. +.It Ar newsnapname Pq string +New name of the snapshot. +.El +.It Fn zfs.sync.bookmark source newbookmark +Create a bookmark of an existing source snapshot or bookmark. +Returns 0 if the new bookmark was successfully created, +and a nonzero error code otherwise. +.Pp +Note: Bookmarking requires the corresponding pool feature to be enabled. +.Pp +.Bl -tag -compact -width "newbookmark (string)" +.It Ar source Pq string +Full name of the existing snapshot or bookmark. +.It Ar newbookmark Pq string +Full name of the new bookmark. +.El +.El +.It Sy zfs.check submodule +For each function in the +.Sy zfs.sync +submodule, there is a corresponding +.Sy zfs.check +function which performs a "dry run" of the same operation. +Each takes the same arguments as its +.Sy zfs.sync +counterpart and returns 0 if the operation would succeed, +or a non-zero error code if it would fail, along with any other error details. +That is, each has the same behavior as the corresponding sync function except +for actually executing the requested change. +For example, +.Fn zfs.check.destroy \&"fs" +returns 0 if +.Fn zfs.sync.destroy \&"fs" +would successfully destroy the dataset. 
+.Pp
+The available
+.Sy zfs.check
+functions are:
+.Bl -tag -compact -width "xx"
+.It Fn zfs.check.clone snapshot newdataset
+.It Sy zfs.check.destroy Ns Pq Ar dataset , Op Ar defer Ns = Ns Sy true Ns | Ns Sy false
+.It Fn zfs.check.promote dataset
+.It Fn zfs.check.rollback filesystem
+.It Fn zfs.check.set_prop dataset property value
+.It Fn zfs.check.snapshot dataset
+.El
+.It Sy zfs.list submodule
+The zfs.list submodule provides functions for iterating over datasets and
+properties.
+Rather than returning tables, these functions act as Lua iterators, and are
+generally used as follows:
+.Bd -literal -compact -offset indent
+.No for child in Fn zfs.list.children \&"rpool" No do
+    ...
+end
+.Ed
+.Pp
+The available
+.Sy zfs.list
+functions are:
+.Bl -tag -width "xx"
+.It Fn zfs.list.clones snapshot
+Iterate through all clones of the given snapshot.
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar snapshot Pq string
+Must be a valid snapshot path in the current pool.
+.El
+.It Fn zfs.list.snapshots dataset
+Iterate through all snapshots of the given dataset.
+Each snapshot is returned as a string containing the full dataset name,
+e.g. "pool/fs@snap".
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar dataset Pq string
+Must be a valid filesystem or volume.
+.El
+.It Fn zfs.list.children dataset
+Iterate through all direct children of the given dataset.
+Each child is returned as a string containing the full dataset name,
+e.g. "pool/fs/child".
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar dataset Pq string
+Must be a valid filesystem or volume.
+.El
+.It Fn zfs.list.bookmarks dataset
+Iterate through all bookmarks of the given dataset.
+Each bookmark is returned as a string containing the full dataset name,
+e.g. "pool/fs#bookmark".
+.Pp
+.Bl -tag -compact -width "snapshot (string)"
+.It Ar dataset Pq string
+Must be a valid filesystem or volume.
+.El +.It Fn zfs.list.holds snapshot +Iterate through all user holds on the given snapshot. +Each hold is returned +as a pair of the hold's tag and the timestamp (in seconds since the epoch) at +which it was created. +.Pp +.Bl -tag -compact -width "snapshot (string)" +.It Ar snapshot Pq string +Must be a valid snapshot. +.El +.It Fn zfs.list.properties dataset +An alias for zfs.list.user_properties (see relevant entry). +.Pp +.Bl -tag -compact -width "snapshot (string)" +.It Ar dataset Pq string +Must be a valid filesystem, snapshot, or volume. +.El +.It Fn zfs.list.user_properties dataset +Iterate through all user properties for the given dataset. +For each step of the iteration, output the property name, its value, +and its source. +Throws a Lua error if the dataset is invalid. +.Pp +.Bl -tag -compact -width "snapshot (string)" +.It Ar dataset Pq string +Must be a valid filesystem, snapshot, or volume. +.El +.It Fn zfs.list.system_properties dataset +Returns an array of strings, the names of the valid system (non-user defined) +properties for the given dataset. +Throws a Lua error if the dataset is invalid. +.Pp +.Bl -tag -compact -width "snapshot (string)" +.It Ar dataset Pq string +Must be a valid filesystem, snapshot or volume. +.El +.El +.El +. +.Sh EXAMPLES +. +.Ss Example 1 +The following channel program recursively destroys a filesystem and all its +snapshots and children in a naive manner. +Note that this does not involve any error handling or reporting. +.Bd -literal -offset indent +function destroy_recursive(root) + for child in zfs.list.children(root) do + destroy_recursive(child) + end + for snap in zfs.list.snapshots(root) do + zfs.sync.destroy(snap) + end + zfs.sync.destroy(root) +end +destroy_recursive("pool/somefs") +.Ed +. 
+.Ss Example 2 +A more verbose and robust version of the same channel program, which +properly detects and reports errors, and also takes the dataset to destroy +as a command line argument, would be as follows: +.Bd -literal -offset indent +succeeded = {} +failed = {} + +function destroy_recursive(root) + for child in zfs.list.children(root) do + destroy_recursive(child) + end + for snap in zfs.list.snapshots(root) do + err = zfs.sync.destroy(snap) + if (err ~= 0) then + failed[snap] = err + else + succeeded[snap] = err + end + end + err = zfs.sync.destroy(root) + if (err ~= 0) then + failed[root] = err + else + succeeded[root] = err + end +end + +args = ... +argv = args["argv"] + +destroy_recursive(argv[1]) + +results = {} +results["succeeded"] = succeeded +results["failed"] = failed +return results +.Ed +. +.Ss Example 3 +The following function performs a forced promote operation by attempting to +promote the given clone and destroying any conflicting snapshots. +.Bd -literal -offset indent +function force_promote(ds) + errno, details = zfs.check.promote(ds) + if (errno == EEXIST) then + assert(details ~= Nil) + for i, snap in ipairs(details) do + zfs.sync.destroy(ds .. "@" .. snap) + end + elseif (errno ~= 0) then + return errno + end + return zfs.sync.promote(ds) +end +.Ed diff --git a/sys/contrib/openzfs/man/man8/zfs-project.8 b/sys/contrib/openzfs/man/man8/zfs-project.8 new file mode 100644 index 000000000000..4ebfdf6ffe4f --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-project.8 @@ -0,0 +1,143 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. 
+.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-PROJECT 8 +.Os +. +.Sh NAME +.Nm zfs-project +.Nd manage projects in ZFS filesystem +.Sh SYNOPSIS +.Nm zfs +.Cm project +.Oo Fl d Ns | Ns Fl r Ns Oc +.Ar file Ns | Ns Ar directory Ns … +.Nm zfs +.Cm project +.Fl C +.Oo Fl kr Ns Oc +.Ar file Ns | Ns Ar directory Ns … +.Nm zfs +.Cm project +.Fl c +.Oo Fl 0 Ns Oc +.Oo Fl d Ns | Ns Fl r Ns Oc +.Op Fl p Ar id +.Ar file Ns | Ns Ar directory Ns … +.Nm zfs +.Cm project +.Op Fl p Ar id +.Oo Fl rs Ns Oc +.Ar file Ns | Ns Ar directory Ns … +. +.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm project +.Oo Fl d Ns | Ns Fl r Ns Oc +.Ar file Ns | Ns Ar directory Ns … +.Xc +List project identifier (ID) and inherit flag of files and directories. +.Bl -tag -width "-d" +.It Fl d +Show the directory project ID and inherit flag, not its children. +.It Fl r +List subdirectories recursively. 
+.El
+.It Xo
+.Nm zfs
+.Cm project
+.Fl C
+.Oo Fl kr Ns Oc
+.Ar file Ns | Ns Ar directory Ns …
+.Xc
+Clear project inherit flag and/or ID on the files and directories.
+.Bl -tag -width "-k"
+.It Fl k
+Keep the project ID unchanged.
+If not specified, the project ID will be reset to zero.
+.It Fl r
+Clear subdirectories' flags recursively.
+.El
+.It Xo
+.Nm zfs
+.Cm project
+.Fl c
+.Oo Fl 0 Ns Oc
+.Oo Fl d Ns | Ns Fl r Ns Oc
+.Op Fl p Ar id
+.Ar file Ns | Ns Ar directory Ns …
+.Xc
+Check project ID and inherit flag on the files and directories:
+report entries without the project inherit flag, or with project IDs different
+from the target directory's project ID or the one specified with
+.Fl p .
+.Bl -tag -width "-p id"
+.It Fl 0
+Delimit filenames with a NUL byte instead of a newline, and do not output
+diagnostics.
+.It Fl d
+Check the directory project ID and inherit flag, not its children.
+.It Fl p Ar id
+Compare to
+.Ar id
+instead of the target files and directories' project IDs.
+.It Fl r
+Check subdirectories recursively.
+.El
+.It Xo
+.Nm zfs
+.Cm project
+.Fl p Ar id
+.Oo Fl rs Ns Oc
+.Ar file Ns | Ns Ar directory Ns …
+.Xc
+Set project ID and/or inherit flag on the files and directories.
+.Bl -tag -width "-p id"
+.It Fl p Ar id
+Set the project ID to the given value.
+.It Fl r
+Set on subdirectories recursively.
+.It Fl s
+Set project inherit flag on the given files and directories.
+This is usually used for setting up tree quotas with
+.Fl r .
+In that case, the directory's project ID
+will be set for all its descendants, unless specified explicitly with
+.Fl p .
+.El
+.El
+.
+.Sh SEE ALSO
+.Xr zfs-projectspace 8
diff --git a/sys/contrib/openzfs/man/man8/zfs-projectspace.8 b/sys/contrib/openzfs/man/man8/zfs-projectspace.8
new file mode 120000
index 000000000000..8bc2f1df305e
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zfs-projectspace.8
@@ -0,0 +1 @@
+zfs-userspace.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-promote.8 b/sys/contrib/openzfs/man/man8/zfs-promote.8 new file mode 100644 index 000000000000..435a7a5d0144 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-promote.8 @@ -0,0 +1,86 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-PROMOTE 8 +.Os +. +.Sh NAME +.Nm zfs-promote +.Nd promote clone dataset to no longer depend on origin snapshot +.Sh SYNOPSIS +.Nm zfs +.Cm promote +.Ar clone +. 
+.Sh DESCRIPTION +The +.Nm zfs Cm promote +command makes it possible to destroy the dataset that the clone was created +from. +The clone parent-child dependency relationship is reversed, so that the origin +dataset becomes a clone of the specified dataset. +.Pp +The snapshot that was cloned, and any snapshots previous to this snapshot, are +now owned by the promoted clone. +The space they use moves from the origin dataset to the promoted clone, so +enough space must be available to accommodate these snapshots. +No new space is consumed by this operation, but the space accounting is +adjusted. +The promoted clone must not have any conflicting snapshot names of its own. +The +.Nm zfs Cm rename +subcommand can be used to rename any conflicting snapshots. +. +.Sh EXAMPLES +.\" These are, respectively, examples 10 from zfs.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Promoting a ZFS Clone +The following commands illustrate how to test out changes to a file system, and +then replace the original file system with the changed one, using clones, clone +promotion, and renaming: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm create Ar pool/project/production + populate /pool/project/production with data +.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today +.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta + make changes to /pool/project/beta and test them +.No # Nm zfs Cm promote Ar pool/project/beta +.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy +.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production + once the legacy version is no longer needed, it can be destroyed +.No # Nm zfs Cm destroy Ar pool/project/legacy +.Ed +. 
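+.Ss Example 2 : No Observing the Reversed Clone Relationship
+The reversal of the parent-child relationship can be observed through the
+read-only
+.Sy origin
+property.
+Continuing the datasets from the previous example (before the renames), the
+following illustrative session shows the origin of each dataset before and
+after the promotion; once promoted, the origin snapshot
+.Ar today
+belongs to the former clone:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm get Fl Ho Ar value origin pool/project/beta
+pool/project/production@today
+.No # Nm zfs Cm promote Ar pool/project/beta
+.No # Nm zfs Cm get Fl Ho Ar value origin pool/project/beta
+-
+.No # Nm zfs Cm get Fl Ho Ar value origin pool/project/production
+pool/project/beta@today
+.Ed
+.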
+.Sh SEE ALSO +.Xr zfs-clone 8 , +.Xr zfs-rename 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-receive.8 b/sys/contrib/openzfs/man/man8/zfs-receive.8 new file mode 100644 index 000000000000..a56b46941ef4 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-receive.8 @@ -0,0 +1,466 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd March 12, 2023 +.Dt ZFS-RECEIVE 8 +.Os +. 
+.Sh NAME +.Nm zfs-receive +.Nd create snapshot from backup stream +.Sh SYNOPSIS +.Nm zfs +.Cm receive +.Op Fl FhMnsuv +.Op Fl o Sy origin Ns = Ns Ar snapshot +.Op Fl o Ar property Ns = Ns Ar value +.Op Fl x Ar property +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Nm zfs +.Cm receive +.Op Fl FhMnsuv +.Op Fl d Ns | Ns Fl e +.Op Fl o Sy origin Ns = Ns Ar snapshot +.Op Fl o Ar property Ns = Ns Ar value +.Op Fl x Ar property +.Ar filesystem +.Nm zfs +.Cm receive +.Fl A +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm receive +.Fl c +.Op Fl vn +.Ar filesystem Ns | Ns Ar snapshot +. +.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm receive +.Op Fl FhMnsuv +.Op Fl o Sy origin Ns = Ns Ar snapshot +.Op Fl o Ar property Ns = Ns Ar value +.Op Fl x Ar property +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Xc +.It Xo +.Nm zfs +.Cm receive +.Op Fl FhMnsuv +.Op Fl d Ns | Ns Fl e +.Op Fl o Sy origin Ns = Ns Ar snapshot +.Op Fl o Ar property Ns = Ns Ar value +.Op Fl x Ar property +.Ar filesystem +.Xc +Creates a snapshot whose contents are as specified in the stream provided on +standard input. +If a full stream is received, then a new file system is created as well. +Streams are created using the +.Nm zfs Cm send +subcommand, which by default creates a full stream. +.Nm zfs Cm recv +can be used as an alias for +.Nm zfs Cm receive . +.Pp +If an incremental stream is received, then the destination file system must +already exist, and its most recent snapshot must match the incremental stream's +source. +For +.Sy zvols , +the destination device link is destroyed and recreated, which means the +.Sy zvol +cannot be accessed during the +.Cm receive +operation. +.Pp +When a snapshot replication package stream that is generated by using the +.Nm zfs Cm send Fl R +command is received, any snapshots that do not exist on the sending location are +destroyed by using the +.Nm zfs Cm destroy Fl d +command. 
+.Pp
+The ability to send and receive deduplicated send streams has been removed.
+However, a deduplicated send stream created with older software can be converted
+to a regular (non-deduplicated) stream by using the
+.Nm zstream Cm redup
+command.
+.Pp
+If
+.Fl o Em property Ns = Ns Ar value
+or
+.Fl x Em property
+is specified, it applies to the effective value of the property throughout
+the entire subtree of replicated datasets.
+Effective property values will be set
+.Pq Fl o
+or inherited
+.Pq Fl x
+on the topmost file system in the replicated subtree.
+In descendant datasets, if the
+property is set by the send stream, it will be overridden by forcing the
+property to be inherited from the topmost file system.
+Received properties are retained in spite of being overridden
+and may be restored with
+.Nm zfs Cm inherit Fl S .
+Specifying
+.Fl o Sy origin Ns = Ns Em snapshot
+is a special case because, even though
+.Sy origin
+is a read-only property and cannot be set, it is allowed to receive the send
+stream as a clone of the given snapshot.
+.Pp
+Raw encrypted send streams (created with
+.Nm zfs Cm send Fl w )
+may only be received as is, and cannot be re-encrypted, decrypted, or
+recompressed by the receive process.
+Unencrypted streams can be received as
+encrypted datasets, either through inheritance or by specifying encryption
+parameters with the
+.Fl o
+options.
+Note that the
+.Sy keylocation
+property cannot be overridden to
+.Sy prompt
+during a receive.
+This is because the receive process itself is already using
+the standard input for the send stream.
+Instead, the property can be overridden after the receive completes.
+.Pp
+The added security provided by raw sends adds some restrictions to the send
+and receive process.
+ZFS will not allow a mix of raw receives and non-raw receives.
+Specifically, any raw incremental receives that are attempted after
+a non-raw receive will fail.
+Non-raw receives do not have this restriction and, +therefore, are always possible. +Because of this, it is best practice to always +use either raw sends for their security benefits or non-raw sends for their +flexibility when working with encrypted datasets, but not a combination. +.Pp +The reason for this restriction stems from the inherent restrictions of the +AEAD ciphers that ZFS uses to encrypt data. +When using ZFS native encryption, +each block of data is encrypted against a randomly generated number known as +the "initialization vector" (IV), which is stored in the filesystem metadata. +This number is required by the encryption algorithms whenever the data is to +be decrypted. +Together, all of the IVs provided for all of the blocks in a +given snapshot are collectively called an "IV set". +When ZFS performs a raw send, the IV set is transferred from the source +to the destination in the send stream. +When ZFS performs a non-raw send, the data is decrypted by the source +system and re-encrypted by the destination system, creating a snapshot with +effectively the same data, but a different IV set. +In order for decryption to work after a raw send, ZFS must ensure that +the IV set used on both the source and destination side match. +When an incremental raw receive is performed on +top of an existing snapshot, ZFS will check to confirm that the "from" +snapshot on both the source and destination were using the same IV set, +ensuring the new IV set is consistent. +.Pp +The name of the snapshot +.Pq and file system, if a full stream is received +that this subcommand creates depends on the argument type and the use of the +.Fl d +or +.Fl e +options. +.Pp +If the argument is a snapshot name, the specified +.Ar snapshot +is created. +If the argument is a file system or volume name, a snapshot with the same name +as the sent snapshot is created within the specified +.Ar filesystem +or +.Ar volume . 
+If neither the
+.Fl d
+nor the
+.Fl e
+option is specified, the target snapshot name is used exactly as
+provided.
+.Pp
+The
+.Fl d
+and
+.Fl e
+options cause the file system name of the target snapshot to be determined by
+appending a portion of the sent snapshot's name to the specified target
+.Ar filesystem .
+If the
+.Fl d
+option is specified, all but the first element of the sent snapshot's file
+system path
+.Pq usually the pool name
+is used and any required intermediate file systems within the specified one are
+created.
+If the
+.Fl e
+option is specified, then only the last element of the sent snapshot's file
+system name
+.Pq i.e. the name of the source file system itself
+is used as the target file system name.
+.Bl -tag -width "-F"
+.It Fl F
+Force a rollback of the file system to the most recent snapshot before
+performing the receive operation.
+If receiving an incremental replication stream
+.Po for example, one generated by
+.Nm zfs Cm send Fl R Op Fl i Ns | Ns Fl I
+.Pc ,
+destroy snapshots and file systems that do not exist on the sending side.
+.It Fl d
+Discard the first element of the sent snapshot's file system name, using the
+remaining elements to determine the name of the target file system for the new
+snapshot as described in the paragraph above.
+.It Fl e
+Discard all but the last element of the sent snapshot's file system name, using
+that element to determine the name of the target file system for the new
+snapshot as described in the paragraph above.
+.It Fl h
+Skip the receive of holds.
+This has no effect if holds are not sent.
+.It Fl M
+Force an unmount of the file system while receiving a snapshot.
+This option is not supported on Linux.
+.It Fl n
+Do not actually receive the stream.
+This can be useful in conjunction with the
+.Fl v
+option to verify the name the receive operation would use.
+.It Fl o Sy origin Ns = Ns Ar snapshot
+Forces the stream to be received as a clone of the given snapshot.
+If the stream is a full send stream, this will create the filesystem
+described by the stream as a clone of the specified snapshot.
+The choice of snapshot does not affect the success or failure of the
+receive, as long as the snapshot exists.
+If the stream is an incremental send stream, all the normal verification will be
+performed.
+.It Fl o Em property Ns = Ns Ar value
+Sets the specified property as if the command
+.Nm zfs Cm set Em property Ns = Ns Ar value
+had been invoked immediately before the receive.
+When receiving a stream from
+.Nm zfs Cm send Fl R ,
+causes the property to be inherited by all descendant datasets, as though
+.Nm zfs Cm inherit Em property
+had been run on any descendant datasets that have this property set on the
+sending system.
+.Pp
+If the send stream was sent with
+.Fl c
+then overriding the
+.Sy compression
+property will have no effect on received data but the
+.Sy compression
+property will be set.
+To have the data recompressed on receive, remove the
+.Fl c
+flag from the send stream.
+.Pp
+Any editable property can be set at receive time.
+Set-once properties bound
+to the received data, such as
+.Sy normalization
+and
+.Sy casesensitivity ,
+cannot be set at receive time even when the datasets are newly created by
+.Nm zfs Cm receive .
+Additionally, the settable properties
+.Sy version
+and
+.Sy volsize
+cannot be set at receive time.
+.Pp
+The
+.Fl o
+option may be specified multiple times, for different properties.
+An error results if the same property is specified in multiple
+.Fl o
+or
+.Fl x
+options.
+.Pp
+The
+.Fl o
+option may also be used to override encryption properties upon initial receive.
+This allows unencrypted streams to be received as encrypted datasets.
+To cause the received dataset (or root dataset of a recursive stream) to be
+received as an encryption root, specify encryption properties in the same
+manner as is required for
+.Nm zfs Cm create .
+For instance: +.Dl # Nm zfs Cm send Pa tank/test@snap1 | Nm zfs Cm recv Fl o Sy encryption Ns = Ns Sy on Fl o Sy keyformat Ns = Ns Sy passphrase Fl o Sy keylocation Ns = Ns Pa file:///path/to/keyfile +.Pp +Note that +.Fl o Sy keylocation Ns = Ns Sy prompt +may not be specified here, since the standard input +is already being utilized for the send stream. +Once the receive has completed, you can use +.Nm zfs Cm set +to change this setting after the fact. +Similarly, you can receive a dataset as an encrypted child by specifying +.Fl x Sy encryption +to force the property to be inherited. +Overriding encryption properties (except for +.Sy keylocation ) +is not possible with raw send streams. +.It Fl s +If the receive is interrupted, save the partially received state, rather +than deleting it. +Interruption may be due to premature termination of the stream +.Po e.g. due to network failure or failure of the remote system +if the stream is being read over a network connection +.Pc , +a checksum error in the stream, termination of the +.Nm zfs Cm receive +process, or unclean shutdown of the system. +.Pp +The receive can be resumed with a stream generated by +.Nm zfs Cm send Fl t Ar token , +where the +.Ar token +is the value of the +.Sy receive_resume_token +property of the filesystem or volume which is received into. +.Pp +To use this flag, the storage pool must have the +.Sy extensible_dataset +feature enabled. +See +.Xr zpool-features 7 +for details on ZFS feature flags. +.It Fl u +File system that is associated with the received stream is not mounted. +.It Fl v +Print verbose information about the stream and the time required to perform the +receive operation. +.It Fl x Em property +Ensures that the effective value of the specified property after the +receive is unaffected by the value of that property in the send stream (if any), +as if the property had been excluded from the send stream. 
+.Pp
+If the specified property is not present in the send stream, this option does
+nothing.
+.Pp
+If a received property needs to be overridden, the effective value will be
+set or inherited, depending on whether the property is inheritable or not.
+.Pp
+In the case of an incremental update,
+.Fl x
+leaves any existing local setting or explicit inheritance unchanged.
+.Pp
+All
+.Fl o
+restrictions (e.g. set-once) apply equally to
+.Fl x .
+.El
+.It Xo
+.Nm zfs
+.Cm receive
+.Fl A
+.Ar filesystem Ns | Ns Ar volume
+.Xc
+Abort an interrupted
+.Nm zfs Cm receive Fl s ,
+deleting its saved partially received state.
+.It Xo
+.Nm zfs
+.Cm receive
+.Fl c
+.Op Fl vn
+.Ar filesystem Ns | Ns Ar snapshot
+.Xc
+Attempt to repair data corruption in the specified dataset,
+by using the provided stream as the source of healthy data.
+This method of healing can only heal data blocks present in the stream.
+Metadata cannot be healed by corrective receive.
+Running a scrub is recommended post-healing to ensure all data corruption was
+repaired.
+.Pp
+It is important to consider why the corruption happened in the first place.
+If you have slowly failing hardware, periodically repairing the data
+will not save you from data loss later on when the hardware fails
+completely.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 12, 13 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Remotely Replicating ZFS Data
+The following commands send a full stream and then an incremental stream to a
+remote machine, restoring them into
+.Em poolB/received/fs@a
+and
+.Em poolB/received/fs@b ,
+respectively.
+.Em poolB
+must contain the file system
+.Em poolB/received ,
+and must not initially contain
+.Em poolB/received/fs .
+.Bd -literal -compact -offset Ds +.No # Nm zfs Cm send Ar pool/fs@a | +.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs Ns @ Ns Ar a +.No # Nm zfs Cm send Fl i Ar a pool/fs@b | +.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs +.Ed +. +.Ss Example 2 : No Using the Nm zfs Cm receive Fl d No Option +The following command sends a full stream of +.Ar poolA/fsA/fsB@snap +to a remote machine, receiving it into +.Ar poolB/received/fsA/fsB@snap . +The +.Ar fsA/fsB@snap +portion of the received snapshot's name is determined from the name of the sent +snapshot. +.Ar poolB +must contain the file system +.Ar poolB/received . +If +.Ar poolB/received/fsA +does not exist, it is created as an empty file system. +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm send Ar poolA/fsA/fsB@snap | +.No " " Nm ssh Ar host Nm zfs Cm receive Fl d Ar poolB/received +.Ed +. +.Sh SEE ALSO +.Xr zfs-send 8 , +.Xr zstream 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-recv.8 b/sys/contrib/openzfs/man/man8/zfs-recv.8 new file mode 120000 index 000000000000..f11b7add7bcc --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-recv.8 @@ -0,0 +1 @@ +zfs-receive.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-redact.8 b/sys/contrib/openzfs/man/man8/zfs-redact.8 new file mode 120000 index 000000000000..f7c605788388 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-redact.8 @@ -0,0 +1 @@ +zfs-send.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-release.8 b/sys/contrib/openzfs/man/man8/zfs-release.8 new file mode 120000 index 000000000000..58809d66a5a8 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-release.8 @@ -0,0 +1 @@ +zfs-hold.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-rename.8 b/sys/contrib/openzfs/man/man8/zfs-rename.8 new file mode 100644 index 000000000000..8fedc67469e6 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-rename.8 @@ -0,0 +1,161 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-RENAME 8 +.Os +. 
+.Sh NAME +.Nm zfs-rename +.Nd rename ZFS dataset +.Sh SYNOPSIS +.Nm zfs +.Cm rename +.Op Fl f +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Nm zfs +.Cm rename +.Fl p +.Op Fl f +.Ar filesystem Ns | Ns Ar volume +.Ar filesystem Ns | Ns Ar volume +.Nm zfs +.Cm rename +.Fl u +.Op Fl f +.Ar filesystem Ar filesystem +.Nm zfs +.Cm rename +.Fl r +.Ar snapshot Ar snapshot +. +.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm rename +.Op Fl f +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Xc +.It Xo +.Nm zfs +.Cm rename +.Fl p +.Op Fl f +.Ar filesystem Ns | Ns Ar volume +.Ar filesystem Ns | Ns Ar volume +.Xc +.It Xo +.Nm zfs +.Cm rename +.Fl u +.Op Fl f +.Ar filesystem +.Ar filesystem +.Xc +Renames the given dataset. +The new target can be located anywhere in the ZFS hierarchy, with the exception +of snapshots. +Snapshots can only be renamed within the parent file system or volume. +When renaming a snapshot, the parent file system of the snapshot does not need +to be specified as part of the second argument. +Renamed file systems can inherit new mount points, in which case they are +unmounted and remounted at the new mount point. +.Bl -tag -width "-a" +.It Fl f +Force unmount any file systems that need to be unmounted in the process. +This flag has no effect if used together with the +.Fl u +flag. +.It Fl p +Creates all the nonexistent parent datasets. +Datasets created in this manner are automatically mounted according to the +.Sy mountpoint +property inherited from their parent. +.It Fl u +Do not remount file systems during rename. +If a file system's +.Sy mountpoint +property is set to +.Sy legacy +or +.Sy none , +the file system is not unmounted even if this option is not given. +.El +.It Xo +.Nm zfs +.Cm rename +.Fl r +.Ar snapshot Ar snapshot +.Xc +Recursively rename the snapshots of all descendent datasets. 
+Snapshots are the only type of dataset that can be renamed recursively.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 10, 15 from zfs.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Promoting a ZFS Clone
+The following commands illustrate how to test out changes to a file system, and
+then replace the original file system with the changed one, using clones, clone
+promotion, and renaming:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm create Ar pool/project/production
+ populate /pool/project/production with data
+.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today
+.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta
+ make changes to /pool/project/beta and test them
+.No # Nm zfs Cm promote Ar pool/project/beta
+.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy
+.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production
+ once the legacy version is no longer needed, it can be destroyed
+.No # Nm zfs Cm destroy Ar pool/project/legacy
+.Ed
+.
+.Ss Example 2 : No Performing a Rolling Snapshot
+The following example shows how to maintain a history of snapshots with a
+consistent naming scheme.
+To keep a week's worth of snapshots, the user destroys the oldest snapshot, +renames the remaining snapshots, and then creates a new snapshot, as follows: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday +.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today +.Ed diff --git a/sys/contrib/openzfs/man/man8/zfs-rewrite.8 b/sys/contrib/openzfs/man/man8/zfs-rewrite.8 new file mode 100644 index 000000000000..ca5340c7e5eb --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-rewrite.8 @@ -0,0 +1,90 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2025 iXsystems, Inc. 
+.\"
+.Dd July 23, 2025
+.Dt ZFS-REWRITE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-rewrite
+.Nd rewrite specified files without modification
+.Sh SYNOPSIS
+.Nm zfs
+.Cm rewrite
+.Oo Fl Prvx Ns Oc
+.Op Fl l Ar length
+.Op Fl o Ar offset
+.Ar file Ns | Ns Ar directory Ns …
+.
+.Sh DESCRIPTION
+Rewrites blocks of each specified
+.Ar file
+as is, without modification, at a new location and possibly with new
+properties, such as checksum, compression, dedup, and copies,
+as if they were atomically read and written back.
+.Bl -tag -width "-r"
+.It Fl P
+Perform a physical rewrite, preserving the logical birth time of blocks.
+By default, rewrite updates logical birth times, making blocks appear
+as modified in snapshots and incremental send streams.
+A physical rewrite preserves logical birth times, avoiding unnecessary
+inclusion in incremental streams.
+Physical rewrite requires the
+.Sy physical_rewrite
+feature to be enabled on the pool.
+.It Fl l Ar length
+Rewrite at most this number of bytes.
+.It Fl o Ar offset
+Start at this offset in bytes.
+.It Fl r
+Recurse into directories.
+.It Fl v
+Print names of all successfully rewritten files.
+.It Fl x
+Don't cross file system mount points when recursing.
+.El
+.Sh NOTES
+Rewriting cloned blocks or blocks that are part of any snapshot,
+like some property changes, may increase pool space usage.
+Holes that were never written or were previously zero-compressed are
+not rewritten and will remain holes even if compression is disabled.
+.Pp
+If a
+.Fl l
+or
+.Fl o
+value requests a rewrite of regions past the end of the file, those
+regions are silently ignored, and no error is reported.
+.Pp
+By default, rewritten blocks update their logical birth time,
+meaning they will be included in incremental
+.Nm zfs Cm send
+streams as modified data.
+When the
+.Fl P
+flag is used, rewritten blocks preserve their logical birth time, since
+there are no user data changes.
+.
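+.Sh EXAMPLES
+.Ss Example 1 : No Recompressing Existing Data
+The following commands change the
+.Sy compression
+property of a hypothetical file system mounted at
+.Pa /tank/data
+and then rewrite its existing data in place so that it is stored with the new
+compression algorithm, printing the name of each rewritten file:
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm set Sy compression Ns = Ns Sy zstd Ar tank/data
+.No # Nm zfs Cm rewrite Fl rv Ar /tank/data
+.Ed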
+.Sh SEE ALSO +.Xr zfsprops 7 , +.Xr zpool-features 7 diff --git a/sys/contrib/openzfs/man/man8/zfs-rollback.8 b/sys/contrib/openzfs/man/man8/zfs-rollback.8 new file mode 100644 index 000000000000..d0d4b1c7e594 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-rollback.8 @@ -0,0 +1,87 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd April 28, 2025 +.Dt ZFS-ROLLBACK 8 +.Os +. +.Sh NAME +.Nm zfs-rollback +.Nd roll ZFS dataset back to snapshot +.Sh SYNOPSIS +.Nm zfs +.Cm rollback +.Op Fl Rfr +.Ar snapshot +. 
+.Sh DESCRIPTION +When a dataset is rolled back, all data that has changed since the snapshot is +discarded, and the dataset reverts to the state at the time of the snapshot. +By default, the command refuses to roll back to a snapshot other than the most +recent one. +In order to do so, all intermediate snapshots and bookmarks must be destroyed by +specifying the +.Fl r +option. +.Pp +The +.Fl rR +options do not recursively destroy the child snapshots of a recursive snapshot. +Only direct snapshots of the specified filesystem are destroyed by either of +these options. +To completely roll back a recursive snapshot, you must roll back the individual +child snapshots. +.Bl -tag -width "-R" +.It Fl R +Destroy any more recent snapshots and bookmarks, as well as any clones of those +snapshots. +.It Fl f +Used with the +.Fl R +option to force an unmount of any clone file systems that are to be destroyed. +.It Fl r +Destroy any snapshots and bookmarks more recent than the one specified. +.El +. +.Sh EXAMPLES +.\" These are, respectively, examples 8 from zfs.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Rolling Back a ZFS File System +The following command reverts the contents of +.Ar pool/home/anne +to the snapshot named +.Ar yesterday , +deleting all intermediate snapshots: +.Dl # Nm zfs Cm rollback Fl r Ar pool/home/anne Ns @ Ns Ar yesterday +. +.Sh SEE ALSO +.Xr zfs-snapshot 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-send.8 b/sys/contrib/openzfs/man/man8/zfs-send.8 new file mode 100644 index 000000000000..6c5f6b94afd5 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-send.8 @@ -0,0 +1,747 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. 
+.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" Copyright (c) 2024, Klara, Inc. +.\" +.Dd August 29, 2025 +.Dt ZFS-SEND 8 +.Os +. +.Sh NAME +.Nm zfs-send +.Nd generate backup stream of ZFS dataset +.Sh SYNOPSIS +.Nm zfs +.Cm send +.Op Fl DLPVbcehnpsvw +.Op Fl R Op Fl X Ar dataset Ns Oo , Ns Ar dataset Oc Ns … +.Op Oo Fl I Ns | Ns Fl i Oc Ar snapshot +.Ar snapshot +.Nm zfs +.Cm send +.Op Fl DLPVcensvw +.Op Fl i Ar snapshot Ns | Ns Ar bookmark +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Nm zfs +.Cm send +.Fl -redact Ar redaction_bookmark +.Op Fl DLPVcenpv +.Op Fl i Ar snapshot Ns | Ns Ar bookmark +.Ar snapshot +.Nm zfs +.Cm send +.Op Fl PVenv +.Fl t +.Ar receive_resume_token +.Nm zfs +.Cm send +.Op Fl PVnv +.Fl S Ar filesystem +.Nm zfs +.Cm redact +.Ar snapshot redaction_bookmark +.Ar redaction_snapshot Ns … +. 
+.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm send +.Op Fl DLPVbcehnpsvw +.Op Fl R Op Fl X Ar dataset Ns Oo , Ns Ar dataset Oc Ns … +.Op Oo Fl I Ns | Ns Fl i Oc Ar snapshot +.Ar snapshot +.Xc +Creates a stream representation of the second +.Ar snapshot , +which is written to standard output. +The output can be redirected to a file or to a different system +.Po for example, using +.Xr ssh 1 +.Pc . +By default, a full stream is generated. +.Bl -tag -width "-D" +.It Fl D , -dedup +Deduplicated send is no longer supported. +This flag is accepted for backwards compatibility, but a regular, +non-deduplicated stream will be generated. +.It Fl I Ar snapshot +Generate a stream package that sends all intermediary snapshots from the first +snapshot to the second snapshot. +For example, +.Fl I Em @a Em fs@d +is similar to +.Fl i Em @a Em fs@b Ns \&; Fl i Em @b Em fs@c Ns \&; Fl i Em @c Em fs@d . +The incremental source may be specified as with the +.Fl i +option. +.It Fl L , -large-block +Generate a stream which may contain blocks larger than 128 KiB. +This flag has no effect if the +.Sy large_blocks +pool feature is disabled, or if the +.Sy recordsize +property of this filesystem has never been set above 128 KiB. +The receiving system must have the +.Sy large_blocks +pool feature enabled as well. +This flag is required if the +.Sy large_microzap +pool feature is active. +See +.Xr zpool-features 7 +for details on ZFS feature flags and the +.Sy large_blocks +feature. +.It Fl P , -parsable +Print machine-parsable verbose information about the stream package generated. +.It Fl R , -replicate +Generate a replication stream package, which will replicate the specified +file system, and all descendent file systems, up to the named snapshot. +When received, all properties, snapshots, descendent file systems, and clones +are preserved. +.Pp +If the +.Fl i +or +.Fl I +flags are used in conjunction with the +.Fl R +flag, an incremental replication stream is generated. 
+The current values of properties, and current snapshot and file system names are +set when the stream is received. +If the +.Fl F +flag is specified when this stream is received, snapshots and file systems that +do not exist on the sending side are destroyed. +If the +.Fl R +flag is used to send encrypted datasets, then +.Fl w +must also be specified. +.It Fl V , -proctitle +Set the process title to a per-second report of how much data has been sent. +.It Fl X , -exclude Ar dataset Ns Oo , Ns Ar dataset Oc Ns … +With +.Fl R , +.Fl X +specifies a set of datasets (and, hence, their descendants), +to be excluded from the send stream. +The root dataset may not be excluded. +.Fl X Ar a Fl X Ar b +is equivalent to +.Fl X Ar a , Ns Ar b . +.It Fl e , -embed +Generate a more compact stream by using +.Sy WRITE_EMBEDDED +records for blocks which are stored more compactly on disk by the +.Sy embedded_data +pool feature. +This flag has no effect if the +.Sy embedded_data +feature is disabled. +The receiving system must have the +.Sy embedded_data +feature enabled. +If the +.Sy lz4_compress +or +.Sy zstd_compress +features are active on the sending system, then the receiving system must have +the corresponding features enabled as well. +Datasets that are sent with this flag may not be +received as an encrypted dataset, since encrypted datasets cannot use the +.Sy embedded_data +feature. +See +.Xr zpool-features 7 +for details on ZFS feature flags and the +.Sy embedded_data +feature. +.It Fl b , -backup +Sends only received property values whether or not they are overridden by local +settings, but only if the dataset has ever been received. +Use this option when you want +.Nm zfs Cm receive +to restore received properties backed up on the sent dataset and to avoid +sending local settings that may have nothing to do with the source dataset, +but only with how the data is backed up. 
+.It Fl c , -compressed +Generate a more compact stream by using compressed WRITE records for blocks +which are compressed on disk and in memory +.Po see the +.Sy compression +property for details +.Pc . +If the +.Sy lz4_compress +or +.Sy zstd_compress +features are active on the sending system, then the receiving system must have +the corresponding features enabled as well. +If the +.Sy large_blocks +feature is enabled on the sending system but the +.Fl L +option is not supplied in conjunction with +.Fl c , +then the data will be decompressed before sending so it can be split into +smaller block sizes. +Streams sent with +.Fl c +will not have their data recompressed on the receiver side using +.Fl o Sy compress Ns = Ar value . +The data will stay compressed as it was from the sender. +The new compression property will be set for future data. +Note that uncompressed data from the sender will still attempt to +compress on the receiver, unless you specify +.Fl o Sy compress Ns = Em off . +.It Fl w , -raw +For encrypted datasets, send data exactly as it exists on disk. +This allows backups to be taken even if encryption keys are not currently +loaded. +The backup may then be received on an untrusted machine since that machine will +not have the encryption keys to read the protected data or alter it without +being detected. +Upon being received, the dataset will have the same encryption +keys as it did on the send side, although the +.Sy keylocation +property will be defaulted to +.Sy prompt +if not otherwise provided. +For unencrypted datasets, this flag will be equivalent to +.Fl Lec . +Note that if you do not use this flag for sending encrypted datasets, data will +be sent unencrypted and may be re-encrypted with a different encryption key on +the receiving system, which will disable the ability to do a raw send to that +system for incrementals. 
+.It Fl h , -holds
+Generate a stream package that includes any snapshot holds (created with the
+.Nm zfs Cm hold
+command), and indicates to
+.Nm zfs Cm receive
+that the holds should be applied to the dataset on the receiving system.
+.It Fl i Ar snapshot
+Generate an incremental stream from the first
+.Ar snapshot
+.Pq the incremental source
+to the second
+.Ar snapshot
+.Pq the incremental target .
+The incremental source can be specified as the last component of the snapshot
+name
+.Po the
+.Sy @
+character and following
+.Pc
+and it is assumed to be from the same file system as the incremental target.
+.Pp
+If the destination is a clone, the source may be the origin snapshot, which must
+be fully specified
+.Po for example,
+.Em pool/fs@origin ,
+not just
+.Em @origin
+.Pc .
+.It Fl n , -dryrun
+Do a dry-run
+.Pq Qq No-op
+send.
+Do not generate any actual send data.
+This is useful in conjunction with the
+.Fl v
+or
+.Fl P
+flags to determine what data will be sent.
+In this case, the verbose output will be written to standard output
+.Po contrast with a non-dry-run, where the stream is written to standard output
+and the verbose output goes to standard error
+.Pc .
+.It Fl p , -props
+Include the dataset's properties in the stream.
+This flag is implicit when
+.Fl R
+is specified.
+The receiving system must also support this feature.
+Sends of encrypted datasets must use
+.Fl w
+when using this flag.
+.It Fl s , -skip-missing
+Allows sending a replication stream even when there are snapshots missing in the
+hierarchy.
+When a snapshot is missing, instead of throwing an error and aborting the send,
+a warning is printed to the standard error stream, and the dataset to which it
+belongs and its descendants are skipped.
+This flag can only be used in conjunction with
+.Fl R .
+.It Fl v , -verbose
+Print verbose information about the stream package generated.
+This information includes a per-second report of how much data has been sent.
+The same report can be requested by sending +.Dv SIGINFO +or +.Dv SIGUSR1 , +regardless of +.Fl v . +.Pp +The format of the stream is committed. +You will be able to receive your streams on future versions of ZFS. +.El +.It Xo +.Nm zfs +.Cm send +.Op Fl DLPVcenvw +.Op Fl i Ar snapshot Ns | Ns Ar bookmark +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot +.Xc +Generate a send stream, which may be of a filesystem, and may be incremental +from a bookmark. +If the destination is a filesystem or volume, the pool must be read-only, or the +filesystem must not be mounted. +When the stream generated from a filesystem or volume is received, the default +snapshot name will be +.Qq --head-- . +.Bl -tag -width "-D" +.It Fl D , -dedup +Deduplicated send is no longer supported. +This flag is accepted for backwards compatibility, but a regular, +non-deduplicated stream will be generated. +.It Fl L , -large-block +Generate a stream which may contain blocks larger than 128 KiB. +This flag has no effect if the +.Sy large_blocks +pool feature is disabled, or if the +.Sy recordsize +property of this filesystem has never been set above 128 KiB. +The receiving system must have the +.Sy large_blocks +pool feature enabled as well. +See +.Xr zpool-features 7 +for details on ZFS feature flags and the +.Sy large_blocks +feature. +.It Fl P , -parsable +Print machine-parsable verbose information about the stream package generated. +.It Fl c , -compressed +Generate a more compact stream by using compressed WRITE records for blocks +which are compressed on disk and in memory +.Po see the +.Sy compression +property for details +.Pc . +If the +.Sy lz4_compress +or +.Sy zstd_compress +features are active on the sending system, then the receiving system must have +the corresponding features enabled as well. 
+If the +.Sy large_blocks +feature is enabled on the sending system but the +.Fl L +option is not supplied in conjunction with +.Fl c , +then the data will be decompressed before sending so it can be split into +smaller block sizes. +.It Fl w , -raw +For encrypted datasets, send data exactly as it exists on disk. +This allows backups to be taken even if encryption keys are not currently +loaded. +The backup may then be received on an untrusted machine since that machine will +not have the encryption keys to read the protected data or alter it without +being detected. +Upon being received, the dataset will have the same encryption +keys as it did on the send side, although the +.Sy keylocation +property will be defaulted to +.Sy prompt +if not otherwise provided. +For unencrypted datasets, this flag will be equivalent to +.Fl Lec . +Note that if you do not use this flag for sending encrypted datasets, data will +be sent unencrypted and may be re-encrypted with a different encryption key on +the receiving system, which will disable the ability to do a raw send to that +system for incrementals. +.It Fl e , -embed +Generate a more compact stream by using +.Sy WRITE_EMBEDDED +records for blocks which are stored more compactly on disk by the +.Sy embedded_data +pool feature. +This flag has no effect if the +.Sy embedded_data +feature is disabled. +The receiving system must have the +.Sy embedded_data +feature enabled. +If the +.Sy lz4_compress +or +.Sy zstd_compress +features are active on the sending system, then the receiving system must have +the corresponding features enabled as well. +Datasets that are sent with this flag may not be received as an encrypted +dataset, +since encrypted datasets cannot use the +.Sy embedded_data +feature. +See +.Xr zpool-features 7 +for details on ZFS feature flags and the +.Sy embedded_data +feature. +.It Fl i Ar snapshot Ns | Ns Ar bookmark +Generate an incremental send stream. 
+The incremental source must be an earlier snapshot in the destination's history. +It will commonly be an earlier snapshot in the destination's file system, in +which case it can be specified as the last component of the name +.Po the +.Sy # +or +.Sy @ +character and following +.Pc . +.Pp +If the incremental target is a clone, the incremental source can be the origin +snapshot, or an earlier snapshot in the origin's filesystem, or the origin's +origin, etc. +.It Fl n , -dryrun +Do a dry-run +.Pq Qq No-op +send. +Do not generate any actual send data. +This is useful in conjunction with the +.Fl v +or +.Fl P +flags to determine what data will be sent. +In this case, the verbose output will be written to standard output +.Po contrast with a non-dry-run, where the stream is written to standard output +and the verbose output goes to standard error +.Pc . +.It Fl v , -verbose +Print verbose information about the stream package generated. +This information includes a per-second report of how much data has been sent. +The same report can be requested by sending +.Dv SIGINFO +or +.Dv SIGUSR1 , +regardless of +.Fl v . +.El +.It Xo +.Nm zfs +.Cm send +.Fl -redact Ar redaction_bookmark +.Op Fl DLPVcenpv +.Op Fl i Ar snapshot Ns | Ns Ar bookmark +.Ar snapshot +.Xc +Generate a redacted send stream. +This send stream contains all blocks from the snapshot being sent that aren't +included in the redaction list contained in the bookmark specified by the +.Fl -redact +(or +.Fl d ) +flag. +The resulting send stream is said to be redacted with respect to the snapshots +the bookmark specified by the +.Fl -redact No flag was created with . +The bookmark must have been created by running +.Nm zfs Cm redact +on the snapshot being sent. +.Pp +This feature can be used to allow clones of a filesystem to be made available on +a remote system, in the case where their parent need not (or needs to not) be +usable. 
+For example, if a filesystem contains sensitive data, and it has clones where +that sensitive data has been secured or replaced with dummy data, redacted sends +can be used to replicate the secured data without replicating the original +sensitive data, while still sharing all possible blocks. +A snapshot that has been redacted with respect to a set of snapshots will +contain all blocks referenced by at least one snapshot in the set, but will +contain none of the blocks referenced by none of the snapshots in the set. +In other words, if all snapshots in the set have modified a given block in the +parent, that block will not be sent; but if one or more snapshots have not +modified a block in the parent, they will still reference the parent's block, so +that block will be sent. +Note that only user data will be redacted. +.Pp +When the redacted send stream is received, we will generate a redacted +snapshot. +Due to the nature of redaction, a redacted dataset can only be used in the +following ways: +.Bl -enum -width "a." +.It +To receive, as a clone, an incremental send from the original snapshot to one +of the snapshots it was redacted with respect to. +In this case, the stream will produce a valid dataset when received because all +blocks that were redacted in the parent are guaranteed to be present in the +child's send stream. +This use case will produce a normal snapshot, which can be used just like other +snapshots. +. +.It +To receive an incremental send from the original snapshot to something +redacted with respect to a subset of the set of snapshots the initial snapshot +was redacted with respect to. +In this case, each block that was redacted in the original is still redacted +(redacting with respect to additional snapshots causes less data to be redacted +(because the snapshots define what is permitted, and everything else is +redacted)). +This use case will produce a new redacted snapshot. 
+.It +To receive an incremental send from a redaction bookmark of the original +snapshot that was created when redacting with respect to a subset of the set of +snapshots the initial snapshot was redacted with respect to. +A send stream from such a redaction bookmark will contain all of the blocks +necessary to fill in any redacted data, should it be needed, because the sending +system is aware of what blocks were originally redacted. +This will either produce a normal snapshot or a redacted one, depending on +whether the new send stream is redacted. +.It +To receive an incremental send from a redacted version of the initial +snapshot that is redacted with respect to a subset of the set of snapshots the +initial snapshot was redacted with respect to. +A send stream from a compatible redacted dataset will contain all of the blocks +necessary to fill in any redacted data. +This will either produce a normal snapshot or a redacted one, depending on +whether the new send stream is redacted. +.It +To receive a full send as a clone of the redacted snapshot. +Since the stream is a full send, it definitionally contains all the data needed +to create a new dataset. +This use case will either produce a normal snapshot or a redacted one, depending +on whether the full send stream was redacted. +.El +.Pp +These restrictions are detected and enforced by +.Nm zfs Cm receive ; +a redacted send stream will contain the list of snapshots that the stream is +redacted with respect to. +These are stored with the redacted snapshot, and are used to detect and +correctly handle the cases above. +Note that for technical reasons, +raw sends and redacted sends cannot be combined at this time. +.It Xo +.Nm zfs +.Cm send +.Op Fl PVenv +.Fl t +.Ar receive_resume_token +.Xc +Creates a send stream which resumes an interrupted receive. +The +.Ar receive_resume_token +is the value of the +.Sy receive_resume_token +property on the filesystem or volume that was being +received into. 
+See the documentation for +.Nm zfs Cm receive Fl s +for more details. +.It Xo +.Nm zfs +.Cm send +.Op Fl PVnv +.Op Fl i Ar snapshot Ns | Ns Ar bookmark +.Fl S +.Ar filesystem +.Xc +Generate a send stream from a dataset that has been partially received. +.Bl -tag -width "-L" +.It Fl S , -saved +This flag requires that the specified filesystem previously received a resumable +send that did not finish and was interrupted. +In such scenarios this flag +enables the user to send this partially received state. +Using this flag will always use the last fully received snapshot +as the incremental source if it exists. +.El +.It Xo +.Nm zfs +.Cm redact +.Ar snapshot redaction_bookmark +.Ar redaction_snapshot Ns … +.Xc +Generate a new redaction bookmark. +In addition to the typical bookmark information, a redaction bookmark contains +the list of redacted blocks and the list of redaction snapshots specified. +The redacted blocks are blocks in the snapshot which are not referenced by any +of the redaction snapshots. +These blocks are found by iterating over the metadata in each redaction snapshot +to determine what has been changed since the target snapshot. +Redaction is designed to support redacted zfs sends; see the entry for +.Nm zfs Cm send +for more information on the purpose of this operation. +If a redact operation fails partway through (due to an error or a system +failure), the redaction can be resumed by rerunning the same command. +.El +.Ss Redaction +ZFS has support for a limited version of data subsetting, in the form of +redaction. +Using the +.Nm zfs Cm redact +command, a +.Sy redaction bookmark +can be created that stores a list of blocks containing sensitive information. +When provided to +.Nm zfs Cm send , +this causes a +.Sy redacted send +to occur. +Redacted sends omit the blocks containing sensitive information, +replacing them with REDACT records. +When these send streams are received, a +.Sy redacted dataset +is created. 
+A redacted dataset cannot be mounted by default, since it is incomplete. +It can be used to receive other send streams. +In this way datasets can be used for data backup and replication, +with all the benefits that zfs send and receive have to offer, +while protecting sensitive information from being +stored on less-trusted machines or services. +.Pp +For the purposes of redaction, there are two steps to the process. +A redact step, and a send/receive step. +First, a redaction bookmark is created. +This is done by providing the +.Nm zfs Cm redact +command with a parent snapshot, a bookmark to be created, and a number of +redaction snapshots. +These redaction snapshots must be descendants of the parent snapshot, +and they should modify data that is considered sensitive in some way. +Any blocks of data modified by all of the redaction snapshots will +be listed in the redaction bookmark, because it represents the truly sensitive +information. +When it comes to the send step, the send process will not send +the blocks listed in the redaction bookmark, instead replacing them with +REDACT records. +When received on the target system, this will create a +redacted dataset, missing the data that corresponds to the blocks in the +redaction bookmark on the sending system. +The incremental send streams from +the original parent to the redaction snapshots can then also be received on +the target system, and this will produce a complete snapshot that can be used +normally. +Incrementals from one snapshot on the parent filesystem and another +can also be done by sending from the redaction bookmark, rather than the +snapshots themselves. +.Pp +In order to make the purpose of the feature more clear, an example is provided. +Consider a zfs filesystem containing four files. +These files represent information for an online shopping service. 
+One file contains a list of usernames and passwords, another contains purchase +histories, +a third contains click tracking data, and a fourth contains user preferences. +The owner of this data wants to make it available for their development teams to +test against, and their market research teams to do analysis on. +The development teams need information about user preferences and the click +tracking data, while the market research teams need information about purchase +histories and user preferences. +Neither needs access to the usernames and passwords. +However, because all of this data is stored in one ZFS filesystem, +it must all be sent and received together. +In addition, the owner of the data +wants to take advantage of features like compression, checksumming, and +snapshots, so they do want to continue to use ZFS to store and transmit their +data. +Redaction can help them do so. +First, they would make two clones of a snapshot of the data on the source. +In one clone, they create the setup they want their market research team to see; +they delete the usernames and passwords file, +and overwrite the click tracking data with dummy information. +In another, they create the setup they want the development teams +to see, by replacing the passwords with fake information and replacing the +purchase histories with randomly generated ones. +They would then create a redaction bookmark on the parent snapshot, +using snapshots on the two clones as redaction snapshots. +The parent can then be sent, redacted, to the target +server where the research and development teams have access. +Finally, incremental sends from the parent snapshot to each of the clones can be +sent +to and received on the target server; these snapshots are identical to the +ones on the source, and are ready to be used, while the parent snapshot on the +target contains none of the username and password data present on the source, +because it was removed by the redacted send operation. +. 
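+.Pp +As a sketch, the workflow described above might look like the following; +the pool, dataset, snapshot, and bookmark names are illustrative only: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm snapshot Ar pool/store Ns @ Ns Ar orig +.No # Nm zfs Cm clone Ar pool/store@orig pool/store/research + secure or replace the sensitive data in /pool/store/research +.No # Nm zfs Cm snapshot Ar pool/store/research Ns @ Ns Ar clean +.No # Nm zfs Cm redact Ar pool/store@orig book1 pool/store/research@clean +.No # Nm zfs Cm send Fl -redact Ar book1 pool/store@orig | +.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/store +.No # Nm zfs Cm send Fl i Ar pool/store@orig pool/store/research@clean | +.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/store/research +.Ed 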
+.Sh SIGNALS +See +.Fl v . +. +.Sh EXAMPLES +.\" These are, respectively, examples 12, 13 from zfs.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Remotely Replicating ZFS Data +The following commands send a full stream and then an incremental stream to a +remote machine, restoring them into +.Em poolB/received/fs@a +and +.Em poolB/received/fs@b , +respectively. +.Em poolB +must contain the file system +.Em poolB/received , +and must not initially contain +.Em poolB/received/fs . +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm send Ar pool/fs@a | +.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs Ns @ Ns Ar a +.No # Nm zfs Cm send Fl i Ar a pool/fs@b | +.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs +.Ed +. +.Ss Example 2 : No Using the Nm zfs Cm receive Fl d No Option +The following command sends a full stream of +.Ar poolA/fsA/fsB@snap +to a remote machine, receiving it into +.Ar poolB/received/fsA/fsB@snap . +The +.Ar fsA/fsB@snap +portion of the received snapshot's name is determined from the name of the sent +snapshot. +.Ar poolB +must contain the file system +.Ar poolB/received . +If +.Ar poolB/received/fsA +does not exist, it is created as an empty file system. +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm send Ar poolA/fsA/fsB@snap | +.No " " Nm ssh Ar host Nm zfs Cm receive Fl d Ar poolB/received +.Ed +. +.Sh SEE ALSO +.Xr zfs-bookmark 8 , +.Xr zfs-receive 8 , +.Xr zfs-redact 8 , +.Xr zfs-snapshot 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-set.8 b/sys/contrib/openzfs/man/man8/zfs-set.8 new file mode 100644 index 000000000000..08daf09d05f8 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-set.8 @@ -0,0 +1,377 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. 
+.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd October 12, 2024 +.Dt ZFS-SET 8 +.Os +. +.Sh NAME +.Nm zfs-set +.Nd set properties on ZFS datasets +.Sh SYNOPSIS +.Nm zfs +.Cm set +.Op Fl u +.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns … +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns … +.Nm zfs +.Cm get +.Op Fl r Ns | Ns Fl d Ar depth +.Op Fl Hp +.Op Fl j Op Ar --json-int +.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc +.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc +.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc +.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns … +.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns … +.Nm zfs +.Cm inherit +.Op Fl rS +.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns … +. 
+.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm set +.Op Fl u +.Ar property Ns = Ns Ar value Oo Ar property Ns = Ns Ar value Oc Ns … +.Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns … +.Xc +Only some properties can be edited. +See +.Xr zfsprops 7 +for more information on what properties can be set and acceptable +values. +Numeric values can be specified as exact values, or in a human-readable form +with a suffix of +.Sy B , K , M , G , T , P , E , Z +.Po for bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, +or zettabytes, respectively +.Pc . +User properties can be set on snapshots. +For more information, see the +.Em User Properties +section of +.Xr zfsprops 7 . +.Bl -tag -width "-u" +.It Fl u +Update the mountpoint, sharenfs, or sharesmb property, but do not mount or +share the dataset. +.El +.It Xo +.Nm zfs +.Cm get +.Op Fl r Ns | Ns Fl d Ar depth +.Op Fl Hp +.Op Fl j Op Ar --json-int +.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc +.Oo Fl s Ar source Ns Oo , Ns Ar source Oc Ns … Oc +.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc +.Cm all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns … +.Oo Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns | Ns Ar bookmark Oc Ns … +.Xc +Displays properties for the given datasets. +If no datasets are specified, then the command displays properties for all +datasets on the system. +For each property, the following columns are displayed: +.Bl -tag -compact -offset 4n -width "property" +.It Sy name +Dataset name +.It Sy property +Property name +.It Sy value +Property value +.It Sy source +Property source +.Sy local , default , inherited , temporary , received , No or Sy - Pq none . +.El +.Pp +All columns are displayed by default, though this can be controlled by using the +.Fl o +option. +This command takes a comma-separated list of properties as described in the +.Sx Native Properties +and +.Sx User Properties +sections of +.Xr zfsprops 7 . 
+.Pp +The value +.Sy all +can be used to display all properties that apply to the given dataset's type +.Pq Sy filesystem , volume , snapshot , No or Sy bookmark . +.Bl -tag -width "-s source" +.It Fl j , -json Op Ar --json-int +Display the output in JSON format. +Specify +.Sy --json-int +to display numbers in integer format instead of strings for JSON output. +.It Fl H +Display output in a form more easily parsed by scripts. +Any headers are omitted, and fields are explicitly separated by a single tab +instead of an arbitrary amount of space. +.It Fl d Ar depth +Recursively display any children of the dataset, limiting the recursion to +.Ar depth . +A depth of +.Sy 1 +will display only the dataset and its direct children. +.It Fl o Ar field +A comma-separated list of columns to display, defaults to +.Sy name , Ns Sy property , Ns Sy value , Ns Sy source . +.It Fl p +Display numbers in parsable +.Pq exact +values. +.It Fl r +Recursively display properties for any children. +.It Fl s Ar source +A comma-separated list of sources to display. +Those properties coming from a source other than those in this list are ignored. +Each source must be one of the following: +.Sy local , default , inherited , temporary , received , No or Sy none . +The default value is all sources. +.It Fl t Ar type +A comma-separated list of types to display, where +.Ar type +is one of +.Sy filesystem , snapshot , volume , bookmark , No or Sy all . +.Sy fs , +.Sy snap , +or +.Sy vol +can be used as aliases for +.Sy filesystem , +.Sy snapshot , +or +.Sy volume . +.El +.It Xo +.Nm zfs +.Cm inherit +.Op Fl rS +.Ar property Ar filesystem Ns | Ns Ar volume Ns | Ns Ar snapshot Ns … +.Xc +Clears the specified property, causing it to be inherited from an ancestor, +restored to default if no ancestor has the property set, or with the +.Fl S +option reverted to the received value if one exists. +See +.Xr zfsprops 7 +for a listing of default values, and details on which properties can be +inherited. 
+.Bl -tag -width "-r" +.It Fl r +Recursively inherit the given property for all children. +.It Fl S +Revert the property to the received value, if one exists; +otherwise, for non-inheritable properties, to the default; +otherwise, operate as if the +.Fl S +option was not specified. +.El +.El +. +.Sh EXAMPLES +.\" These are, respectively, examples 1, 4, 6, 7, 11, 14, 16 from zfs.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Creating a ZFS File System Hierarchy +The following commands create a file system named +.Ar pool/home +and a file system named +.Ar pool/home/bob . +The mount point +.Pa /export/home +is set for the parent file system, and is automatically inherited by the child +file system. +.Dl # Nm zfs Cm create Ar pool/home +.Dl # Nm zfs Cm set Sy mountpoint Ns = Ns Ar /export/home pool/home +.Dl # Nm zfs Cm create Ar pool/home/bob +. +.Ss Example 2 : No Disabling and Enabling File System Compression +The following command disables the +.Sy compression +property for all file systems under +.Ar pool/home . +The next command explicitly enables +.Sy compression +for +.Ar pool/home/anne . +.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy off Ar pool/home +.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy on Ar pool/home/anne +. +.Ss Example 3 : No Setting a Quota on a ZFS File System +The following command sets a quota of 50 Gbytes for +.Ar pool/home/bob : +.Dl # Nm zfs Cm set Sy quota Ns = Ns Ar 50G pool/home/bob +. 
+.Ss Example 4 : No Listing ZFS Properties +The following command lists all properties for +.Ar pool/home/bob : +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm get Sy all Ar pool/home/bob +NAME PROPERTY VALUE SOURCE +pool/home/bob type filesystem - +pool/home/bob creation Tue Jul 21 15:53 2009 - +pool/home/bob used 21K - +pool/home/bob available 20.0G - +pool/home/bob referenced 21K - +pool/home/bob compressratio 1.00x - +pool/home/bob mounted yes - +pool/home/bob quota 20G local +pool/home/bob reservation none default +pool/home/bob recordsize 128K default +pool/home/bob mountpoint /pool/home/bob default +pool/home/bob sharenfs off default +pool/home/bob checksum on default +pool/home/bob compression on local +pool/home/bob atime on default +pool/home/bob devices on default +pool/home/bob exec on default +pool/home/bob setuid on default +pool/home/bob readonly off default +pool/home/bob zoned off default +pool/home/bob snapdir hidden default +pool/home/bob acltype off default +pool/home/bob aclmode discard default +pool/home/bob aclinherit restricted default +pool/home/bob canmount on default +pool/home/bob xattr on default +pool/home/bob copies 1 default +pool/home/bob version 4 - +pool/home/bob utf8only off - +pool/home/bob normalization none - +pool/home/bob casesensitivity sensitive - +pool/home/bob vscan off default +pool/home/bob nbmand off default +pool/home/bob sharesmb off default +pool/home/bob refquota none default +pool/home/bob refreservation none default +pool/home/bob primarycache all default +pool/home/bob secondarycache all default +pool/home/bob usedbysnapshots 0 - +pool/home/bob usedbydataset 21K - +pool/home/bob usedbychildren 0 - +pool/home/bob usedbyrefreservation 0 - +.Ed +.Pp +The following command gets a single property value: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm get Fl H o Sy value compression Ar pool/home/bob +on +.Ed +.Pp +The following command gets a single property value recursively in JSON format: +.Bd -literal 
-compact -offset Ds +.No # Nm zfs Cm get Fl j Fl r Sy mountpoint Ar pool/home | Nm jq +{ + "output_version": { + "command": "zfs get", + "vers_major": 0, + "vers_minor": 1 + }, + "datasets": { + "pool/home": { + "name": "pool/home", + "type": "FILESYSTEM", + "pool": "pool", + "createtxg": "10", + "properties": { + "mountpoint": { + "value": "/pool/home", + "source": { + "type": "DEFAULT", + "data": "-" + } + } + } + }, + "pool/home/bob": { + "name": "pool/home/bob", + "type": "FILESYSTEM", + "pool": "pool", + "createtxg": "1176", + "properties": { + "mountpoint": { + "value": "/pool/home/bob", + "source": { + "type": "DEFAULT", + "data": "-" + } + } + } + } + } +} +.Ed +.Pp +The following command lists all properties with local settings for +.Ar pool/home/bob : +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm get Fl r s Sy local Fl o Sy name , Ns Sy property , Ns Sy value all Ar pool/home/bob +NAME PROPERTY VALUE +pool/home/bob quota 20G +pool/home/bob compression on +.Ed +. +.Ss Example 5 : No Inheriting ZFS Properties +The following command causes +.Ar pool/home/bob No and Ar pool/home/anne +to inherit the +.Sy checksum +property from their parent. +.Dl # Nm zfs Cm inherit Sy checksum Ar pool/home/bob pool/home/anne +. +.Ss Example 6 : No Setting User Properties +The following example sets the user-defined +.Ar com.example : Ns Ar department +property for a dataset: +.Dl # Nm zfs Cm set Ar com.example : Ns Ar department Ns = Ns Ar 12345 tank/accounting +. +.Ss Example 7 : No Setting sharenfs Property Options on a ZFS File System +The following commands show how to set +.Sy sharenfs +property options to enable read-write +access for a set of IP addresses and to enable root access for system +.Qq neo +on the +.Ar tank/home +file system: +.Dl # Nm zfs Cm set Sy sharenfs Ns = Ns ' Ns Ar rw Ns =@123.123.0.0/16:[::1],root= Ns Ar neo Ns ' tank/home +.Pp +If you are using DNS for host name resolution, +specify the fully-qualified hostname. +. 
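+.Ss Example 8 : No Setting a Property Without Remounting +The following sketch uses the +.Fl u +option to change the +.Sy mountpoint +property without remounting the file system; the dataset name and new +mountpoint are illustrative: +.Dl # Nm zfs Cm set Fl u Sy mountpoint Ns = Ns Ar /newhome pool/home +.Pp +The new value takes effect the next time the file system is mounted, for +example via +.Nm zfs Cm mount Ar pool/home . +. 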
+.Sh SEE ALSO +.Xr zfsprops 7 , +.Xr zfs-list 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-share.8 b/sys/contrib/openzfs/man/man8/zfs-share.8 new file mode 100644 index 000000000000..e9c32a44b0c7 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-share.8 @@ -0,0 +1,101 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-SHARE 8 +.Os +. +.Sh NAME +.Nm zfs-share +.Nd share and unshare ZFS filesystems +.Sh SYNOPSIS +.Nm zfs +.Cm share +.Op Fl l +.Fl a Ns | Ns Ar filesystem +.Nm zfs +.Cm unshare +.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint +. 
+.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm share +.Op Fl l +.Fl a Ns | Ns Ar filesystem +.Xc +Shares available ZFS file systems. +.Bl -tag -width "-a" +.It Fl l +Load keys for encrypted filesystems as they are being mounted. +This is equivalent to executing +.Nm zfs Cm load-key +on each encryption root before mounting it. +Note that if a filesystem has +.Sy keylocation Ns = Ns Sy prompt , +this will cause the terminal to interactively block after asking for the key. +.It Fl a +Share all available ZFS file systems. +Invoked automatically as part of the boot process. +.It Ar filesystem +Share the specified filesystem according to the +.Sy sharenfs +and +.Sy sharesmb +properties. +File systems are shared when the +.Sy sharenfs +or +.Sy sharesmb +property is set. +.El +.It Xo +.Nm zfs +.Cm unshare +.Fl a Ns | Ns Ar filesystem Ns | Ns Ar mountpoint +.Xc +Unshares currently shared ZFS file systems. +.Bl -tag -width "-a" +.It Fl a +Unshare all available ZFS file systems. +Invoked automatically as part of the shutdown process. +.It Ar filesystem Ns | Ns Ar mountpoint +Unshare the specified filesystem. +The command can also be given a path to a ZFS file system shared on the system. +.El +.El +. +.Sh SEE ALSO +.Xr exports 5 , +.Xr smb.conf 5 , +.Xr zfsprops 7 diff --git a/sys/contrib/openzfs/man/man8/zfs-snapshot.8 b/sys/contrib/openzfs/man/man8/zfs-snapshot.8 new file mode 100644 index 000000000000..8f4b2c335f09 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-snapshot.8 @@ -0,0 +1,143 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. 
+.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-SNAPSHOT 8 +.Os +. +.Sh NAME +.Nm zfs-snapshot +.Nd create snapshots of ZFS datasets +.Sh SYNOPSIS +.Nm zfs +.Cm snapshot +.Op Fl r +.Oo Fl o Ar property Ns = Ns Ar value Oc Ns … +.Ar dataset Ns @ Ns Ar snapname Ns … +. +.Sh DESCRIPTION +Creates a snapshot of a dataset or multiple snapshots of different +datasets. +.Pp +Snapshots are created atomically. +That is, a snapshot is a consistent image of a dataset at a specific +point in time; it includes all modifications to the dataset made by +system calls that have successfully completed before that point in time. +Recursive snapshots created through the +.Fl r +option are all created at the same time. +.Pp +.Nm zfs Cm snap +can be used as an alias for +.Nm zfs Cm snapshot . +.Pp +See the +.Sx Snapshots +section of +.Xr zfsconcepts 7 +for details. 
+.Bl -tag -width "-o" +.It Fl o Ar property Ns = Ns Ar value +Set the specified property; see +.Nm zfs Cm create +for details. +.It Fl r +Recursively create snapshots of all descendent datasets +.El +. +.Sh EXAMPLES +.\" These are, respectively, examples 2, 3, 10, 15 from zfs.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Creating a ZFS Snapshot +The following command creates a snapshot named +.Ar yesterday . +This snapshot is mounted on demand in the +.Pa .zfs/snapshot +directory at the root of the +.Ar pool/home/bob +file system. +.Dl # Nm zfs Cm snapshot Ar pool/home/bob Ns @ Ns Ar yesterday +. +.Ss Example 2 : No Creating and Destroying Multiple Snapshots +The following command creates snapshots named +.Ar yesterday No of Ar pool/home +and all of its descendent file systems. +Each snapshot is mounted on demand in the +.Pa .zfs/snapshot +directory at the root of its file system. +The second command destroys the newly created snapshots. +.Dl # Nm zfs Cm snapshot Fl r Ar pool/home Ns @ Ns Ar yesterday +.Dl # Nm zfs Cm destroy Fl r Ar pool/home Ns @ Ns Ar yesterday +. +.Ss Example 3 : No Promoting a ZFS Clone +The following commands illustrate how to test out changes to a file system, and +then replace the original file system with the changed one, using clones, clone +promotion, and renaming: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm create Ar pool/project/production + populate /pool/project/production with data +.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today +.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta + make changes to /pool/project/beta and test them +.No # Nm zfs Cm promote Ar pool/project/beta +.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy +.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production + once the legacy version is no longer needed, it can be destroyed +.No # Nm zfs Cm destroy Ar pool/project/legacy +.Ed +. 
+.Ss Example 4 : No Performing a Rolling Snapshot +The following example shows how to maintain a history of snapshots with a +consistent naming scheme. +To keep a week's worth of snapshots, the user destroys the oldest snapshot, +renames the remaining snapshots, and then creates a new snapshot, as follows: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday +.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today +.Ed +. +.Sh SEE ALSO +.Xr zfs-bookmark 8 , +.Xr zfs-clone 8 , +.Xr zfs-destroy 8 , +.Xr zfs-diff 8 , +.Xr zfs-hold 8 , +.Xr zfs-rename 8 , +.Xr zfs-rollback 8 , +.Xr zfs-send 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-unallow.8 b/sys/contrib/openzfs/man/man8/zfs-unallow.8 new file mode 120000 index 000000000000..8886f334bffb --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-unallow.8 @@ -0,0 +1 @@ +zfs-allow.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-unjail.8 b/sys/contrib/openzfs/man/man8/zfs-unjail.8 new file mode 120000 index 000000000000..04cc05a00258 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-unjail.8 @@ -0,0 +1 @@ +zfs-jail.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-unload-key.8 b/sys/contrib/openzfs/man/man8/zfs-unload-key.8 new file mode 120000 index 000000000000..d027a419d1e4 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-unload-key.8 @@ -0,0 +1 @@ +zfs-load-key.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-unmount.8 b/sys/contrib/openzfs/man/man8/zfs-unmount.8 new file mode 120000 index 000000000000..be0d9dbf6c01 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-unmount.8 @@ -0,0 +1 @@ +zfs-mount.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-unzone.8 b/sys/contrib/openzfs/man/man8/zfs-unzone.8 new file mode 120000 index 000000000000..9052b28aa880 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-unzone.8 @@ -0,0 +1 @@ +zfs-zone.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zfs-upgrade.8 b/sys/contrib/openzfs/man/man8/zfs-upgrade.8 new file mode 100644 index 000000000000..a5ce2b760da4 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-upgrade.8 @@ -0,0 +1,104 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-UPGRADE 8 +.Os +. +.Sh NAME +.Nm zfs-upgrade +.Nd manage on-disk version of ZFS filesystems +.Sh SYNOPSIS +.Nm zfs +.Cm upgrade +.Nm zfs +.Cm upgrade +.Fl v +.Nm zfs +.Cm upgrade +.Op Fl r +.Op Fl V Ar version +.Fl a Ns | Ns Ar filesystem +. 
+.Sh DESCRIPTION +.Bl -tag -width "" +.It Xo +.Nm zfs +.Cm upgrade +.Xc +Displays a list of file systems that are not the most recent version. +.It Xo +.Nm zfs +.Cm upgrade +.Fl v +.Xc +Displays a list of currently supported file system versions. +.It Xo +.Nm zfs +.Cm upgrade +.Op Fl r +.Op Fl V Ar version +.Fl a Ns | Ns Ar filesystem +.Xc +Upgrades file systems to a new on-disk version. +Once this is done, the file systems will no longer be accessible on systems +running older versions of ZFS. +.Nm zfs Cm send +streams generated from new snapshots of these file systems cannot be accessed on +systems running older versions of ZFS. +.Pp +In general, the file system version is independent of the pool version. +See +.Xr zpool-features 7 +for information on features of ZFS storage pools. +.Pp +In some cases, the file system version and the pool version are interrelated and +the pool version must be upgraded before the file system version can be +upgraded. +.Bl -tag -width "filesystem" +.It Fl V Ar version +Upgrade to +.Ar version . +If not specified, upgrade to the most recent version. +This +option can only be used to increase the version number, and only up to the most +recent version supported by this version of ZFS. +.It Fl a +Upgrade all file systems on all imported pools. +.It Ar filesystem +Upgrade the specified file system. +.It Fl r +Upgrade the specified file system and all descendent file systems. +.El +.El +.Sh SEE ALSO +.Xr zpool-upgrade 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-userspace.8 b/sys/contrib/openzfs/man/man8/zfs-userspace.8 new file mode 100644 index 000000000000..c255d911740d --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-userspace.8 @@ -0,0 +1,189 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. 
+.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd July 11, 2022 +.Dt ZFS-USERSPACE 8 +.Os +. +.Sh NAME +.Nm zfs-userspace +.Nd display space and quotas of ZFS dataset +.Sh SYNOPSIS +.Nm zfs +.Cm userspace +.Op Fl Hinp +.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc +.Oo Fl s Ar field Oc Ns … +.Oo Fl S Ar field Oc Ns … +.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc +.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path +.Nm zfs +.Cm groupspace +.Op Fl Hinp +.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc +.Oo Fl s Ar field Oc Ns … +.Oo Fl S Ar field Oc Ns … +.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc +.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path +.Nm zfs +.Cm projectspace +.Op Fl Hp +.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc +.Oo Fl s Ar field Oc Ns … +.Oo Fl S Ar field Oc Ns … +.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path +. 
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm userspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Xc
+Displays space consumed by, and quotas on, each user in the specified
+filesystem,
+snapshot, or path.
+If a path is given, the filesystem that contains that path will be used.
+This corresponds to the
+.Sy userused@ Ns Em user ,
+.Sy userobjused@ Ns Em user ,
+.Sy userquota@ Ns Em user ,
+and
+.Sy userobjquota@ Ns Em user
+properties.
+.Bl -tag -width "-S field"
+.It Fl H
+Do not print headers; use tab-delimited output.
+.It Fl S Ar field
+Sort by this field in reverse order.
+See
+.Fl s .
+.It Fl i
+Translate SID to POSIX ID.
+The POSIX ID may be ephemeral if no mapping exists.
+Normal POSIX interfaces
+.Pq like Xr stat 2 , Nm ls Fl l
+perform this translation, so the
+.Fl i
+option allows the output from
+.Nm zfs Cm userspace
+to be compared directly with those utilities.
+However,
+.Fl i
+may lead to confusion if some files were created by an SMB user before an
+SMB-to-POSIX name mapping was established.
+In such a case, some files will be owned by the SMB entity and some by the POSIX
+entity.
+However, the
+.Fl i
+option will report that the POSIX entity has the total usage and quota for both.
+.It Fl n
+Print numeric ID instead of user/group name.
+.It Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+Display only the specified fields from the following set:
+.Sy type ,
+.Sy name ,
+.Sy used ,
+.Sy quota .
+The default is to display all fields.
+.It Fl p
+Use exact
+.Pq parsable
+numeric output.
+.It Fl s Ar field
+Sort output by this field.
+The
+.Fl s
+and
+.Fl S
+flags may be specified multiple times to sort first by one field, then by
+another.
+The default is
+.Fl s Sy type Fl s Sy name .
+.It Fl t Ar type Ns Oo , Ns Ar type Oc Ns …
+Print only the specified types from the following set:
+.Sy all ,
+.Sy posixuser ,
+.Sy smbuser ,
+.Sy posixgroup ,
+.Sy smbgroup .
+The default is
+.Fl t Sy posixuser , Ns Sy smbuser .
+The default can be changed to include group types.
+.El
+.It Xo
+.Nm zfs
+.Cm groupspace
+.Op Fl Hinp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Oo Fl t Ar type Ns Oo , Ns Ar type Oc Ns … Oc
+.Ar filesystem Ns | Ns Ar snapshot
+.Xc
+Displays space consumed by, and quotas on, each group in the specified
+filesystem or snapshot.
+This subcommand is identical to
+.Cm userspace ,
+except that the default types to display are
+.Fl t Sy posixgroup , Ns Sy smbgroup .
+.It Xo
+.Nm zfs
+.Cm projectspace
+.Op Fl Hp
+.Oo Fl o Ar field Ns Oo , Ns Ar field Oc Ns … Oc
+.Oo Fl s Ar field Oc Ns …
+.Oo Fl S Ar field Oc Ns …
+.Ar filesystem Ns | Ns Ar snapshot Ns | Ns Ar path
+.Xc
+Displays space consumed by, and quotas on, each project in the specified
+filesystem or snapshot.
+This subcommand is identical to
+.Cm userspace ,
+except that the project identifier is a numeral, not a name.
+It therefore needs neither the
+.Fl i
+option to map SIDs to POSIX IDs, nor
+.Fl n
+for numeric IDs, nor
+.Fl t
+for types.
+.El
+.
+.Sh SEE ALSO
+.Xr zfsprops 7 ,
+.Xr zfs-set 8
diff --git a/sys/contrib/openzfs/man/man8/zfs-wait.8 b/sys/contrib/openzfs/man/man8/zfs-wait.8
new file mode 100644
index 000000000000..e5c60010d2f9
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zfs-wait.8
@@ -0,0 +1,66 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd July 11, 2022
+.Dt ZFS-WAIT 8
+.Os
+.
+.Sh NAME
+.Nm zfs-wait
+.Nd wait for activity to stop in a ZFS filesystem
+.Sh SYNOPSIS
+.Nm zfs
+.Cm wait
+.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns …
+.Ar filesystem
+.
+.Sh DESCRIPTION
+Waits until all background activity of the given types has ceased in the given
+filesystem.
+The activity could cease because it has completed or because the filesystem has
+been destroyed or unmounted.
+If no activities are specified, the command waits until background activity of
+every type listed below has ceased.
+If there is no activity of the given types in progress, the command returns
+immediately.
+.Pp
+These are the possible values for
+.Ar activity ,
+along with what each one waits for:
+.Bl -tag -compact -offset Ds -width "deleteq"
+.It Sy deleteq
+The filesystem's internal delete queue to empty
+.El
+.Pp
+Note that the internal delete queue does not finish draining until
+all large files have had time to be fully destroyed and all open file
+handles to unlinked files are closed.
+.
+.Sh SEE ALSO +.Xr lsof 8 diff --git a/sys/contrib/openzfs/man/man8/zfs-zone.8 b/sys/contrib/openzfs/man/man8/zfs-zone.8 new file mode 100644 index 000000000000..a56a304e82b2 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs-zone.8 @@ -0,0 +1,117 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2011, Pawel Jakub Dawidek <pjd@FreeBSD.org> +.\" Copyright (c) 2012, Glen Barber <gjb@FreeBSD.org> +.\" Copyright (c) 2012, Bryan Drewery <bdrewery@FreeBSD.org> +.\" Copyright (c) 2013, Steven Hartland <smh@FreeBSD.org> +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright (c) 2014, Xin LI <delphij@FreeBSD.org> +.\" Copyright (c) 2014-2015, The FreeBSD Foundation, All Rights Reserved. +.\" Copyright (c) 2016 Nexenta Systems, Inc. 
All Rights Reserved.
+.\" Copyright 2019 Richard Laager. All rights reserved.
+.\" Copyright 2018 Nexenta Systems, Inc.
+.\" Copyright 2019 Joyent, Inc.
+.\" Copyright 2021 Klara, Inc.
+.\"
+.Dd July 11, 2022
+.Dt ZFS-ZONE 8
+.Os
+.
+.Sh NAME
+.Nm zfs-zone ,
+.Nm zfs-unzone
+.Nd attach and detach ZFS filesystems to user namespaces
+.Sh SYNOPSIS
+.Nm zfs Cm zone
+.Ar nsfile
+.Ar filesystem
+.Nm zfs Cm unzone
+.Ar nsfile
+.Ar filesystem
+.
+.Sh DESCRIPTION
+.Bl -tag -width ""
+.It Xo
+.Nm zfs
+.Cm zone
+.Ar nsfile
+.Ar filesystem
+.Xc
+Attach the specified
+.Ar filesystem
+to the user namespace identified by
+.Ar nsfile .
+From now on, this file system tree can be managed from within a user namespace
+if the
+.Sy zoned
+property has been set.
+.Pp
+You cannot attach a zoned dataset's children to another user namespace.
+Nor can you attach the root file system
+of the user namespace, or any dataset
+which needs to be mounted before the zfs service
+is run inside the user namespace,
+as it would remain unmounted until it is
+mounted by the service inside the user namespace.
+.Pp
+To allow management of the dataset from within a user namespace, the
+.Sy zoned
+property has to be set and the user namespace needs access to the
+.Pa /dev/zfs
+device.
+The
+.Sy quota
+property cannot be changed from within a user namespace.
+.Pp
+After a dataset is attached to a user namespace and the
+.Sy zoned
+property is set,
+a zoned file system cannot be mounted outside the user namespace,
+since the user namespace administrator might have set the mount point
+to an unacceptable value.
+.It Xo
+.Nm zfs
+.Cm unzone
+.Ar nsfile
+.Ar filesystem
+.Xc
+Detach the specified
+.Ar filesystem
+from the user namespace identified by
+.Ar nsfile .
+.El
+.Sh EXAMPLES
+.Ss Example 1 : No Delegating a Dataset to a User Namespace
+The following example delegates the
+.Ar tank/users
+dataset to a user namespace identified by the user namespace file
+.Pa /proc/1234/ns/user .
+.Dl # Nm zfs Cm zone Ar /proc/1234/ns/user Ar tank/users +. +.Sh SEE ALSO +.Xr zfsprops 7 diff --git a/sys/contrib/openzfs/man/man8/zfs.8 b/sys/contrib/openzfs/man/man8/zfs.8 new file mode 100644 index 000000000000..b7566a727469 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs.8 @@ -0,0 +1,845 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> +.\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. +.\" Copyright (c) 2011, Pawel Jakub Dawidek <pjd@FreeBSD.org> +.\" Copyright (c) 2012, Glen Barber <gjb@FreeBSD.org> +.\" Copyright (c) 2012, Bryan Drewery <bdrewery@FreeBSD.org> +.\" Copyright (c) 2013, Steven Hartland <smh@FreeBSD.org> +.\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. +.\" Copyright (c) 2014, Joyent, Inc. All rights reserved. +.\" Copyright (c) 2014 by Adam Stevko. All rights reserved. +.\" Copyright (c) 2014 Integros [integros.com] +.\" Copyright (c) 2014, Xin LI <delphij@FreeBSD.org> +.\" Copyright (c) 2014-2015, The FreeBSD Foundation, All Rights Reserved. 
+.\" Copyright (c) 2016 Nexenta Systems, Inc. All Rights Reserved. +.\" Copyright 2019 Richard Laager. All rights reserved. +.\" Copyright 2018 Nexenta Systems, Inc. +.\" Copyright 2019 Joyent, Inc. +.\" +.Dd May 12, 2025 +.Dt ZFS 8 +.Os +. +.Sh NAME +.Nm zfs +.Nd configure ZFS datasets +.Sh SYNOPSIS +.Nm +.Fl ?V +.Nm +.Cm version +.Op Fl j +.Nm +.Cm subcommand +.Op Ar arguments +. +.Sh DESCRIPTION +The +.Nm +command configures ZFS datasets within a ZFS storage pool, as described in +.Xr zpool 8 . +A dataset is identified by a unique path within the ZFS namespace: +.Pp +.D1 Ar pool Ns Oo Sy / Ns Ar component Oc Ns Sy / Ns Ar component +.Pp +for example: +.Pp +.Dl rpool/var/log +.Pp +The maximum length of a dataset name is +.Sy ZFS_MAX_DATASET_NAME_LEN No - 1 +ASCII characters (currently 255) satisfying +.Sy [A-Za-z_.:/ -] . +Additionally snapshots are allowed to contain a single +.Sy @ +character, while bookmarks are allowed to contain a single +.Sy # +character. +.Sy / +is used as separator between components. +The maximum amount of nesting allowed in a path is +.Sy zfs_max_dataset_nesting +levels deep. +ZFS tunables +.Pq Sy zfs_* +are explained in +.Xr zfs 4 . +.Pp +A dataset can be one of the following: +.Bl -tag -offset Ds -width "file system" +.It Sy file system +Can be mounted within the standard system namespace and behaves like other file +systems. +While ZFS file systems are designed to be POSIX-compliant, known issues exist +that prevent compliance in some cases. +Applications that depend on standards conformance might fail due to non-standard +behavior when checking file system free space. +.It Sy volume +A logical volume exported as a raw or block device. +This type of dataset should only be used when a block device is required. +File systems are typically used in most environments. +.It Sy snapshot +A read-only version of a file system or volume at a given point in time. 
+It is specified as +.Ar filesystem Ns @ Ns Ar name +or +.Ar volume Ns @ Ns Ar name . +.It Sy bookmark +Much like a +.Sy snapshot , +but without the hold on on-disk data. +It can be used as the source of a send (but not for a receive). +It is specified as +.Ar filesystem Ns # Ns Ar name +or +.Ar volume Ns # Ns Ar name . +.El +.Pp +See +.Xr zfsconcepts 7 +for details. +. +.Ss Properties +Properties are divided into two types: native properties and user-defined +.Pq or Qq user +properties. +Native properties either export internal statistics or control ZFS behavior. +In addition, native properties are either editable or read-only. +User properties have no effect on ZFS behavior, but you can use them to annotate +datasets in a way that is meaningful in your environment. +For more information about properties, see +.Xr zfsprops 7 . +. +.Ss Encryption +Enabling the +.Sy encryption +feature allows for the creation of encrypted filesystems and volumes. +ZFS will encrypt file and zvol data, file attributes, ACLs, permission bits, +directory listings, FUID mappings, and +.Sy userused Ns / Ns Sy groupused Ns / Ns Sy projectused +data. +For an overview of encryption, see +.Xr zfs-load-key 8 . +. +.Sh SUBCOMMANDS +All subcommands that modify state are logged persistently to the pool in their +original form. +.Bl -tag -width "" +.It Nm Fl ? +Displays a help message. +.It Xo +.Nm +.Fl V , -version +.Xc +.It Xo +.Nm +.Cm version +.Op Fl j +.Xc +Displays the software version of the +.Nm +userland utility and the zfs kernel module. +Use +.Fl j +option to output in JSON format. +.El +. +.Ss Dataset Management +.Bl -tag -width "" +.It Xr zfs-list 8 +Lists the property information for the given datasets in tabular form. +.It Xr zfs-create 8 +Creates a new ZFS file system or volume. +.It Xr zfs-destroy 8 +Destroys the given dataset(s), snapshot(s), or bookmark. +.It Xr zfs-rename 8 +Renames the given dataset (filesystem or snapshot). 
+.It Xr zfs-upgrade 8 +Manage upgrading the on-disk version of filesystems. +.El +. +.Ss Snapshots +.Bl -tag -width "" +.It Xr zfs-snapshot 8 +Creates snapshots with the given names. +.It Xr zfs-rollback 8 +Roll back the given dataset to a previous snapshot. +.It Xr zfs-hold 8 Ns / Ns Xr zfs-release 8 +Add or remove a hold reference to the specified snapshot or snapshots. +If a hold exists on a snapshot, attempts to destroy that snapshot by using the +.Nm zfs Cm destroy +command return +.Sy EBUSY . +.It Xr zfs-diff 8 +Display the difference between a snapshot of a given filesystem and another +snapshot of that filesystem from a later time or the current contents of the +filesystem. +.El +. +.Ss Clones +.Bl -tag -width "" +.It Xr zfs-clone 8 +Creates a clone of the given snapshot. +.It Xr zfs-promote 8 +Promotes a clone file system to no longer be dependent on its +.Qq origin +snapshot. +.El +. +.Ss Send & Receive +.Bl -tag -width "" +.It Xr zfs-send 8 +Generate a send stream, which may be of a filesystem, and may be incremental +from a bookmark. +.It Xr zfs-receive 8 +Creates a snapshot whose contents are as specified in the stream provided on +standard input. +If a full stream is received, then a new file system is created as well. +Streams are created using the +.Xr zfs-send 8 +subcommand, which by default creates a full stream. +.It Xr zfs-bookmark 8 +Creates a new bookmark of the given snapshot or bookmark. +Bookmarks mark the point in time when the snapshot was created, and can be used +as the incremental source for a +.Nm zfs Cm send +command. +.It Xr zfs-redact 8 +Generate a new redaction bookmark. +This feature can be used to allow clones of a filesystem to be made available on +a remote system, in the case where their parent need not (or needs to not) be +usable. +.El +. +.Ss Properties +.Bl -tag -width "" +.It Xr zfs-get 8 +Displays properties for the given datasets. 
+.It Xr zfs-set 8
+Sets the property or list of properties to the given value(s) for each dataset.
+.It Xr zfs-inherit 8
+Clears the specified property, causing it to be inherited from an ancestor,
+restored to default if no ancestor has the property set, or with the
+.Fl S
+option reverted to the received value if one exists.
+.El
+.
+.Ss Quotas
+.Bl -tag -width ""
+.It Xr zfs-userspace 8 Ns / Ns Xr zfs-groupspace 8 Ns / Ns Xr zfs-projectspace 8
+Displays space consumed by, and quotas on, each user, group, or project
+in the specified filesystem or snapshot.
+.It Xr zfs-project 8
+List, set, or clear project ID and/or inherit flag on the files or directories.
+.El
+.
+.Ss Mountpoints
+.Bl -tag -width ""
+.It Xr zfs-mount 8
+Displays all currently mounted ZFS file systems, or mounts a ZFS file system
+at the path described by its
+.Sy mountpoint
+property.
+.It Xr zfs-unmount 8
+Unmounts currently mounted ZFS file systems.
+.El
+.
+.Ss Shares
+.Bl -tag -width ""
+.It Xr zfs-share 8
+Shares available ZFS file systems.
+.It Xr zfs-unshare 8
+Unshares currently shared ZFS file systems.
+.El
+.
+.Ss Delegated Administration
+.Bl -tag -width ""
+.It Xr zfs-allow 8
+Delegate permissions on the specified filesystem or volume.
+.It Xr zfs-unallow 8
+Remove delegated permissions on the specified filesystem or volume.
+.El
+.
+.Ss Encryption
+.Bl -tag -width ""
+.It Xr zfs-change-key 8
+Add or change an encryption key on the specified dataset.
+.It Xr zfs-load-key 8
+Load the key for the specified encrypted dataset, enabling access.
+.It Xr zfs-unload-key 8
+Unload a key for the specified dataset,
+removing the ability to access the dataset.
+.El
+.
+.Ss Channel Programs
+.Bl -tag -width ""
+.It Xr zfs-program 8
+Execute ZFS administrative operations
+programmatically via a Lua script-language channel program.
+.El
+.
+.Ss Data rewrite
+.Bl -tag -width ""
+.It Xr zfs-rewrite 8
+Rewrite specified files without modification.
+.El
+.
+.Ss Jails +.Bl -tag -width "" +.It Xr zfs-jail 8 +Attaches a filesystem to a jail. +.It Xr zfs-unjail 8 +Detaches a filesystem from a jail. +.El +. +.Ss Waiting +.Bl -tag -width "" +.It Xr zfs-wait 8 +Wait for background activity in a filesystem to complete. +.El +. +.Sh EXIT STATUS +The +.Nm +utility exits +.Sy 0 +on success, +.Sy 1 +if an error occurs, and +.Sy 2 +if invalid command line options were specified. +. +.Sh EXAMPLES +.\" Examples 1, 4, 6, 7, 11, 14, 16 are shared with zfs-set.8. +.\" Examples 1, 10 are shared with zfs-create.8. +.\" Examples 2, 3, 10, 15 are also shared with zfs-snapshot.8. +.\" Examples 3, 10, 15 are shared with zfs-destroy.8. +.\" Examples 5 are shared with zfs-list.8. +.\" Examples 8 are shared with zfs-rollback.8. +.\" Examples 9, 10 are shared with zfs-clone.8. +.\" Examples 10 are also shared with zfs-promote.8. +.\" Examples 10, 15 also are shared with zfs-rename.8. +.\" Examples 12, 13 are shared with zfs-send.8. +.\" Examples 12, 13 are also shared with zfs-receive.8. +.\" Examples 17, 18, 19, 20, 21 are shared with zfs-allow.8. +.\" Examples 22 are shared with zfs-diff.8. +.\" Examples 23 are shared with zfs-bookmark.8. +.\" Make sure to update them omnidirectionally +.Ss Example 1 : No Creating a ZFS File System Hierarchy +The following commands create a file system named +.Ar pool/home +and a file system named +.Ar pool/home/bob . +The mount point +.Pa /export/home +is set for the parent file system, and is automatically inherited by the child +file system. +.Dl # Nm zfs Cm create Ar pool/home +.Dl # Nm zfs Cm set Sy mountpoint Ns = Ns Ar /export/home pool/home +.Dl # Nm zfs Cm create Ar pool/home/bob +. +.Ss Example 2 : No Creating a ZFS Snapshot +The following command creates a snapshot named +.Ar yesterday . +This snapshot is mounted on demand in the +.Pa .zfs/snapshot +directory at the root of the +.Ar pool/home/bob +file system. +.Dl # Nm zfs Cm snapshot Ar pool/home/bob Ns @ Ns Ar yesterday +. 
+.Ss Example 3 : No Creating and Destroying Multiple Snapshots +The following command creates snapshots named +.Ar yesterday No of Ar pool/home +and all of its descendent file systems. +Each snapshot is mounted on demand in the +.Pa .zfs/snapshot +directory at the root of its file system. +The second command destroys the newly created snapshots. +.Dl # Nm zfs Cm snapshot Fl r Ar pool/home Ns @ Ns Ar yesterday +.Dl # Nm zfs Cm destroy Fl r Ar pool/home Ns @ Ns Ar yesterday +. +.Ss Example 4 : No Disabling and Enabling File System Compression +The following command disables the +.Sy compression +property for all file systems under +.Ar pool/home . +The next command explicitly enables +.Sy compression +for +.Ar pool/home/anne . +.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy off Ar pool/home +.Dl # Nm zfs Cm set Sy compression Ns = Ns Sy on Ar pool/home/anne +. +.Ss Example 5 : No Listing ZFS Datasets +The following command lists all active file systems and volumes in the system. +Snapshots are displayed if +.Sy listsnaps Ns = Ns Sy on . +The default is +.Sy off . +See +.Xr zpoolprops 7 +for more information on pool properties. +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm list +NAME USED AVAIL REFER MOUNTPOINT +pool 450K 457G 18K /pool +pool/home 315K 457G 21K /export/home +pool/home/anne 18K 457G 18K /export/home/anne +pool/home/bob 276K 457G 276K /export/home/bob +.Ed +. +.Ss Example 6 : No Setting a Quota on a ZFS File System +The following command sets a quota of 50 Gbytes for +.Ar pool/home/bob : +.Dl # Nm zfs Cm set Sy quota Ns = Ns Ar 50G pool/home/bob +. 
+.Ss Example 7 : No Listing ZFS Properties +The following command lists all properties for +.Ar pool/home/bob : +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm get Sy all Ar pool/home/bob +NAME PROPERTY VALUE SOURCE +pool/home/bob type filesystem - +pool/home/bob creation Tue Jul 21 15:53 2009 - +pool/home/bob used 21K - +pool/home/bob available 20.0G - +pool/home/bob referenced 21K - +pool/home/bob compressratio 1.00x - +pool/home/bob mounted yes - +pool/home/bob quota 20G local +pool/home/bob reservation none default +pool/home/bob recordsize 128K default +pool/home/bob mountpoint /pool/home/bob default +pool/home/bob sharenfs off default +pool/home/bob checksum on default +pool/home/bob compression on local +pool/home/bob atime on default +pool/home/bob devices on default +pool/home/bob exec on default +pool/home/bob setuid on default +pool/home/bob readonly off default +pool/home/bob zoned off default +pool/home/bob snapdir hidden default +pool/home/bob acltype off default +pool/home/bob aclmode discard default +pool/home/bob aclinherit restricted default +pool/home/bob canmount on default +pool/home/bob xattr on default +pool/home/bob copies 1 default +pool/home/bob version 4 - +pool/home/bob utf8only off - +pool/home/bob normalization none - +pool/home/bob casesensitivity sensitive - +pool/home/bob vscan off default +pool/home/bob nbmand off default +pool/home/bob sharesmb off default +pool/home/bob refquota none default +pool/home/bob refreservation none default +pool/home/bob primarycache all default +pool/home/bob secondarycache all default +pool/home/bob usedbysnapshots 0 - +pool/home/bob usedbydataset 21K - +pool/home/bob usedbychildren 0 - +pool/home/bob usedbyrefreservation 0 - +.Ed +.Pp +The following command gets a single property value: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm get Fl H o Sy value compression Ar pool/home/bob +on +.Ed +.Pp +The following command lists all properties with local settings for +.Ar pool/home/bob : +.Bd 
-literal -compact -offset Ds +.No # Nm zfs Cm get Fl r s Sy local Fl o Sy name , Ns Sy property , Ns Sy value all Ar pool/home/bob +NAME PROPERTY VALUE +pool/home/bob quota 20G +pool/home/bob compression on +.Ed +. +.Ss Example 8 : No Rolling Back a ZFS File System +The following command reverts the contents of +.Ar pool/home/anne +to the snapshot named +.Ar yesterday , +deleting all intermediate snapshots: +.Dl # Nm zfs Cm rollback Fl r Ar pool/home/anne Ns @ Ns Ar yesterday +. +.Ss Example 9 : No Creating a ZFS Clone +The following command creates a writable file system whose initial contents are +the same as +.Ar pool/home/bob@yesterday . +.Dl # Nm zfs Cm clone Ar pool/home/bob@yesterday pool/clone +. +.Ss Example 10 : No Promoting a ZFS Clone +The following commands illustrate how to test out changes to a file system, and +then replace the original file system with the changed one, using clones, clone +promotion, and renaming: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm create Ar pool/project/production + populate /pool/project/production with data +.No # Nm zfs Cm snapshot Ar pool/project/production Ns @ Ns Ar today +.No # Nm zfs Cm clone Ar pool/project/production@today pool/project/beta + make changes to /pool/project/beta and test them +.No # Nm zfs Cm promote Ar pool/project/beta +.No # Nm zfs Cm rename Ar pool/project/production pool/project/legacy +.No # Nm zfs Cm rename Ar pool/project/beta pool/project/production + once the legacy version is no longer needed, it can be destroyed +.No # Nm zfs Cm destroy Ar pool/project/legacy +.Ed +. +.Ss Example 11 : No Inheriting ZFS Properties +The following command causes +.Ar pool/home/bob No and Ar pool/home/anne +to inherit the +.Sy checksum +property from their parent. +.Dl # Nm zfs Cm inherit Sy checksum Ar pool/home/bob pool/home/anne +. 
+.Ss Example 12 : No Remotely Replicating ZFS Data +The following commands send a full stream and then an incremental stream to a +remote machine, restoring them into +.Em poolB/received/fs@a +and +.Em poolB/received/fs@b , +respectively. +.Em poolB +must contain the file system +.Em poolB/received , +and must not initially contain +.Em poolB/received/fs . +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm send Ar pool/fs@a | +.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs Ns @ Ns Ar a +.No # Nm zfs Cm send Fl i Ar a pool/fs@b | +.No " " Nm ssh Ar host Nm zfs Cm receive Ar poolB/received/fs +.Ed +. +.Ss Example 13 : No Using the Nm zfs Cm receive Fl d No Option +The following command sends a full stream of +.Ar poolA/fsA/fsB@snap +to a remote machine, receiving it into +.Ar poolB/received/fsA/fsB@snap . +The +.Ar fsA/fsB@snap +portion of the received snapshot's name is determined from the name of the sent +snapshot. +.Ar poolB +must contain the file system +.Ar poolB/received . +If +.Ar poolB/received/fsA +does not exist, it is created as an empty file system. +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm send Ar poolA/fsA/fsB@snap | +.No " " Nm ssh Ar host Nm zfs Cm receive Fl d Ar poolB/received +.Ed +. +.Ss Example 14 : No Setting User Properties +The following example sets the user-defined +.Ar com.example : Ns Ar department +property for a dataset: +.Dl # Nm zfs Cm set Ar com.example : Ns Ar department Ns = Ns Ar 12345 tank/accounting +. +.Ss Example 15 : No Performing a Rolling Snapshot +The following example shows how to maintain a history of snapshots with a +consistent naming scheme. 
+To keep a week's worth of snapshots, the user destroys the oldest snapshot, +renames the remaining snapshots, and then creates a new snapshot, as follows: +.Bd -literal -compact -offset Ds +.No # Nm zfs Cm destroy Fl r Ar pool/users@7daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@6daysago No @ Ns Ar 7daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@5daysago No @ Ns Ar 6daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@4daysago No @ Ns Ar 5daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@3daysago No @ Ns Ar 4daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@2daysago No @ Ns Ar 3daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@yesterday No @ Ns Ar 2daysago +.No # Nm zfs Cm rename Fl r Ar pool/users@today No @ Ns Ar yesterday +.No # Nm zfs Cm snapshot Fl r Ar pool/users Ns @ Ns Ar today +.Ed +. +.Ss Example 16 : No Setting sharenfs Property Options on a ZFS File System +The following commands show how to set +.Sy sharenfs +property options to enable read-write +access for a set of IP addresses and to enable root access for system +.Qq neo +on the +.Ar tank/home +file system: +.Dl # Nm zfs Cm set Sy sharenfs Ns = Ns ' Ns Ar rw Ns =@123.123.0.0/16:[::1],root= Ns Ar neo Ns ' tank/home +.Pp +If you are using DNS for host name resolution, +specify the fully-qualified hostname. +. +.Ss Example 17 : No Delegating ZFS Administration Permissions on a ZFS Dataset +The following example shows how to set permissions so that user +.Ar cindys +can create, destroy, mount, and take snapshots on +.Ar tank/cindys . +The permissions on +.Ar tank/cindys +are also displayed. 
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Sy cindys create , Ns Sy destroy , Ns Sy mount , Ns Sy snapshot Ar tank/cindys
+.No # Nm zfs Cm allow Ar tank/cindys
+---- Permissions on tank/cindys --------------------------------------
+Local+Descendent permissions:
+	user cindys create,destroy,mount,snapshot
+.Ed
+.Pp
+Because the
+.Ar tank/cindys
+mount point permission is set to 755 by default, user
+.Ar cindys
+will be unable to mount file systems under
+.Ar tank/cindys .
+Add an ACE similar to the following syntax to provide mount point access:
+.Dl # Cm chmod No A+user : Ns Ar cindys Ns :add_subdirectory:allow Ar /tank/cindys
+.
+.Ss Example 18 : No Delegating Create Time Permissions on a ZFS Dataset
+The following example shows how to grant anyone in the group
+.Ar staff
+permission to create file systems in
+.Ar tank/users .
+This syntax also allows staff members to destroy their own file systems, but
+not to destroy anyone else's.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar staff Sy create , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow Fl c Sy destroy Ar tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+	destroy
+Local+Descendent permissions:
+	group staff create,mount
+.Ed
+.
+.Ss Example 19 : No Defining and Granting a Permission Set on a ZFS Dataset
+The following example shows how to define and grant a permission set on the
+.Ar tank/users
+file system.
+The permissions on
+.Ar tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Fl s No @ Ns Ar pset Sy create , Ns Sy destroy , Ns Sy snapshot , Ns Sy mount Ar tank/users
+.No # Nm zfs Cm allow staff No @ Ns Ar pset tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+	@pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+	group staff @pset
+.Ed
+.
+.Ss Example 20 : No Delegating Property Permissions on a ZFS Dataset
+The following example shows how to grant the ability to set quotas and
+reservations on the
+.Ar users/home
+file system.
+The permissions on
+.Ar users/home
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm allow Ar cindys Sy quota , Ns Sy reservation Ar users/home
+.No # Nm zfs Cm allow Ar users/home
+---- Permissions on users/home ---------------------------------------
+Local+Descendent permissions:
+	user cindys quota,reservation
+cindys% zfs set quota=10G users/home/marks
+cindys% zfs get quota users/home/marks
+NAME              PROPERTY  VALUE  SOURCE
+users/home/marks  quota     10G    local
+.Ed
+.
+.Ss Example 21 : No Removing ZFS Delegated Permissions on a ZFS Dataset
+The following example shows how to remove the snapshot permission from the
+.Ar staff
+group on the
+.Sy tank/users
+file system.
+The permissions on
+.Sy tank/users
+are also displayed.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm unallow Ar staff Sy snapshot Ar tank/users
+.No # Nm zfs Cm allow Ar tank/users
+---- Permissions on tank/users ---------------------------------------
+Permission sets:
+	@pset create,destroy,mount,snapshot
+Local+Descendent permissions:
+	group staff @pset
+.Ed
+.
+.Ss Example 22 : No Showing the differences between a snapshot and a ZFS Dataset
+The following example shows how to see what has changed between a prior
+snapshot of a ZFS dataset and its current state.
+The
+.Fl F
+option is used to indicate type information for the files affected.
+.Bd -literal -compact -offset Ds
+.No # Nm zfs Cm diff Fl F Ar tank/test@before tank/test
+M	/	/tank/test/
+M	F	/tank/test/linked	(+1)
+R	F	/tank/test/oldname -> /tank/test/newname
+-	F	/tank/test/deleted
++	F	/tank/test/created
+M	F	/tank/test/modified
+.Ed
+.
+.Ss Example 23 : No Creating a bookmark
+The following example creates a bookmark to a snapshot.
+This bookmark can then be used instead of a snapshot in send streams.
+.Dl # Nm zfs Cm bookmark Ar rpool Ns @ Ns Ar snapshot rpool Ns # Ns Ar bookmark
+.
+.Ss Example 24 : No Setting Sy sharesmb No Property Options on a ZFS File System
+The following example shows how to share an SMB filesystem through ZFS.
+Note that a user and their password must be given.
+.Dl # Nm smbmount Ar //127.0.0.1/share_tmp /mnt/tmp Fl o No user=workgroup/turbo,password=obrut,uid=1000
+.Pp
+Minimal
+.Pa /etc/samba/smb.conf
+configuration is required, as follows.
+.Pp
+Samba will need to bind to the loopback interface for the ZFS utilities to
+communicate with Samba.
+This is the default behavior for most Linux distributions.
+.Pp
+Samba must be able to authenticate a user.
+This can be done in a number of ways
+.Pq Xr passwd 5 , LDAP , Xr smbpasswd 5 , &c.\& .
+How to do this is outside the scope of this document – refer to
+.Xr smb.conf 5
+for more information.
+.Pp
+See the
+.Sx USERSHARES
+section for all configuration options,
+in case you need to modify any options of the share afterwards.
+Do note that any changes done with the
+.Xr net 8
+command will be undone if the share is ever unshared (like via a reboot).
+.
+.Sh ENVIRONMENT VARIABLES
+.Bl -tag -width "ZFS_MODULE_TIMEOUT"
+.It Sy ZFS_COLOR
+Use ANSI color in
+.Nm zfs Cm diff
+and
+.Nm zfs Cm list
+output.
+.It Sy ZFS_MOUNT_HELPER
+Cause
+.Nm zfs Cm mount
+to use
+.Xr mount 8
+to mount ZFS datasets.
+This option is provided for backwards compatibility with older ZFS versions.
+.
+.It Sy ZFS_SET_PIPE_MAX
+Tells
+.Nm zfs
+to set the maximum pipe size for sends/receives.
+Disabled by default on Linux +due to an unfixed deadlock in Linux's pipe size handling code. +. +.\" Shared with zpool.8 +.It Sy ZFS_MODULE_TIMEOUT +Time, in seconds, to wait for +.Pa /dev/zfs +to appear. +Defaults to +.Sy 10 , +max +.Sy 600 Pq 10 minutes . +If +.Pf < Sy 0 , +wait forever; if +.Sy 0 , +don't wait. +.El +. +.Sh INTERFACE STABILITY +.Sy Committed . +. +.Sh SEE ALSO +.Xr attr 1 , +.Xr gzip 1 , +.Xr ssh 1 , +.Xr chmod 2 , +.Xr fsync 2 , +.Xr stat 2 , +.Xr write 2 , +.Xr acl 5 , +.Xr attributes 5 , +.Xr exports 5 , +.Xr zfsconcepts 7 , +.Xr zfsprops 7 , +.Xr exportfs 8 , +.Xr mount 8 , +.Xr net 8 , +.Xr selinux 8 , +.Xr zfs-allow 8 , +.Xr zfs-bookmark 8 , +.Xr zfs-change-key 8 , +.Xr zfs-clone 8 , +.Xr zfs-create 8 , +.Xr zfs-destroy 8 , +.Xr zfs-diff 8 , +.Xr zfs-get 8 , +.Xr zfs-groupspace 8 , +.Xr zfs-hold 8 , +.Xr zfs-inherit 8 , +.Xr zfs-jail 8 , +.Xr zfs-list 8 , +.Xr zfs-load-key 8 , +.Xr zfs-mount 8 , +.Xr zfs-program 8 , +.Xr zfs-project 8 , +.Xr zfs-projectspace 8 , +.Xr zfs-promote 8 , +.Xr zfs-receive 8 , +.Xr zfs-redact 8 , +.Xr zfs-release 8 , +.Xr zfs-rename 8 , +.Xr zfs-rollback 8 , +.Xr zfs-send 8 , +.Xr zfs-set 8 , +.Xr zfs-share 8 , +.Xr zfs-snapshot 8 , +.Xr zfs-unallow 8 , +.Xr zfs-unjail 8 , +.Xr zfs-unload-key 8 , +.Xr zfs-unmount 8 , +.Xr zfs-unshare 8 , +.Xr zfs-upgrade 8 , +.Xr zfs-userspace 8 , +.Xr zfs-wait 8 , +.Xr zpool 8 diff --git a/sys/contrib/openzfs/man/man8/zfs_ids_to_path.8 b/sys/contrib/openzfs/man/man8/zfs_ids_to_path.8 new file mode 100644 index 000000000000..465e336d170c --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs_ids_to_path.8 @@ -0,0 +1,52 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. 
+.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2020 by Delphix. All rights reserved. +.\" +.Dd July 11, 2022 +.Dt ZFS_IDS_TO_PATH 8 +.Os +. +.Sh NAME +.Nm zfs_ids_to_path +.Nd convert objset and object ids to names and paths +.Sh SYNOPSIS +.Nm +.Op Fl v +.Ar pool +.Ar objset-id +.Ar object-id +. +.Sh DESCRIPTION +The +.Sy zfs_ids_to_path +utility converts a provided objset and object ids +into a path to the file they refer to. +.Bl -tag -width "-D" +.It Fl v +Verbose. +Print the dataset name and the file path within the dataset separately. +This will work correctly even if the dataset is not mounted. +.El +. +.Sh SEE ALSO +.Xr zdb 8 , +.Xr zfs 8 diff --git a/sys/contrib/openzfs/man/man8/zfs_prepare_disk.8.in b/sys/contrib/openzfs/man/man8/zfs_prepare_disk.8.in new file mode 100644 index 000000000000..944f7c9e7023 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zfs_prepare_disk.8.in @@ -0,0 +1,71 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" Developed at Lawrence Livermore National Laboratory (LLNL-CODE-403049). +.\" Copyright (C) 2023 Lawrence Livermore National Security, LLC. +.\" Refer to the OpenZFS git commit log for authoritative copyright attribution. +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License Version 1.0 (CDDL-1.0). 
+.\" You can obtain a copy of the license from the top-level file +.\" "OPENSOLARIS.LICENSE" or at <http://opensource.org/licenses/CDDL-1.0>. +.\" You may not use this file except in compliance with the license. +.\" +.\" Developed at Lawrence Livermore National Laboratory (LLNL-CODE-403049) +.\" +.Dd August 30, 2023 +.Dt ZFS_PREPARE_DISK 8 +.Os +. +.Sh NAME +.Nm zfs_prepare_disk +.Nd special script that gets run before bringing a disk into a pool +.Sh DESCRIPTION +.Nm +is an optional script that gets called by libzfs before bringing a disk into a +pool. +It can be modified by the user to run whatever commands are necessary to prepare +a disk for inclusion into the pool. +For example, users can add lines to +.Nm zfs_prepare_disk +to do things like update the drive's firmware or check the drive's health. +.Nm zfs_prepare_disk +is optional and can be removed if not needed. +libzfs will look for the script at @zfsexecdir@/zfs_prepare_disk. +. +.Ss Properties +.Nm zfs_prepare_disk +will be passed the following environment variables: +.sp +.Bl -tag -compact -width "VDEV_ENC_SYSFS_PATH" +. +.It Nm POOL_NAME +.No Name of the pool +.It Nm VDEV_PATH +.No Path to the disk (like /dev/sda) +.It Nm VDEV_PREPARE +.No Reason why the disk is being prepared for inclusion +('create', 'add', 'replace', or 'autoreplace'). +This can be useful if you only want the script to be run under certain actions. +.It Nm VDEV_UPATH +.No Path to one of the underlying devices for the +disk. +For multipath this would return one of the /dev/sd* paths to the disk. +If the device is not a device mapper device, then +.Nm VDEV_UPATH +just returns the same value as +.Nm VDEV_PATH +.It Nm VDEV_ENC_SYSFS_PATH +.No Path to the disk's enclosure sysfs path, if available +.El +.Pp +Note that some of these variables may have a blank value. +.Nm POOL_NAME +is blank at pool creation time, for example. +.Sh ENVIRONMENT +.Nm zfs_prepare_disk +runs with a limited $PATH. 
+.Sh EXIT STATUS
+.Nm zfs_prepare_disk
+should return 0 on success, non-zero otherwise.
+If non-zero is returned, the disk will not be included in the pool.
+.
diff --git a/sys/contrib/openzfs/man/man8/zgenhostid.8 b/sys/contrib/openzfs/man/man8/zgenhostid.8
new file mode 100644
index 000000000000..ff564880f97d
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zgenhostid.8
@@ -0,0 +1,101 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2017 by Lawrence Livermore National Security, LLC.
+.\"
+.Dd July 11, 2022
+.Dt ZGENHOSTID 8
+.Os
+.
+.Sh NAME
+.Nm zgenhostid
+.Nd generate host ID into /etc/hostid
+.Sh SYNOPSIS
+.Nm
+.Op Fl f
+.Op Fl o Ar filename
+.Op Ar hostid
+.
+.Sh DESCRIPTION
+Creates the
+.Pa /etc/hostid
+file and stores the host ID in it.
+If
+.Ar hostid
+is provided, it is validated and stored.
+Otherwise, a random ID is generated.
+.
+.Sh OPTIONS
+.Bl -tag -width "-o filename"
+.It Fl h
+Display a summary of the command-line options.
+.It Fl f
+Allow the output file to be overwritten if it exists.
+.It Fl o Ar filename
+Write to
+.Pa filename
+instead of the default
+.Pa /etc/hostid .
+.It Ar hostid +Specifies the value to be placed in +.Pa /etc/hostid . +It should be a number with a value between 1 and 2^32-1. +If +.Sy 0 , +generate a random ID. +This value +.Em must +be unique among your systems. +It +.Em must +be an 8-digit-long hexadecimal number, optionally prefixed by +.Qq 0x . +.El +. +.Sh FILES +.Pa /etc/hostid +. +.Sh EXAMPLES +.Bl -tag -width Bd +.It Generate a random hostid and store it +.Dl # Nm +.It Record the libc-generated hostid in Pa /etc/hostid +.Dl # Nm Qq $ Ns Pq Nm hostid +.It Record a custom hostid Po Ar 0xdeadbeef Pc in Pa /etc/hostid +.Dl # Nm Ar deadbeef +.It Record a custom hostid Po Ar 0x01234567 Pc in Pa /tmp/hostid No and overwrite the file if it exists +.Dl # Nm Fl f o Ar /tmp/hostid 0x01234567 +.El +. +.Sh SEE ALSO +.Xr genhostid 1 , +.Xr hostid 1 , +.Xr sethostid 3 , +.Xr spl 4 +. +.Sh HISTORY +.Nm +emulates the +.Xr genhostid 1 +utility and is provided for use on systems which +do not include the utility or do not provide the +.Xr sethostid 3 +function. diff --git a/sys/contrib/openzfs/man/man8/zinject.8 b/sys/contrib/openzfs/man/man8/zinject.8 new file mode 100644 index 000000000000..1d9e43aed5ec --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zinject.8 @@ -0,0 +1,311 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
+.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright 2013 Darik Horn <dajhorn@vanadac.com>. All rights reserved. +.\" Copyright (c) 2024, 2025, Klara, Inc. +.\" +.\" lint-ok: WARNING: sections out of conventional order: Sh SYNOPSIS +.\" +.Dd January 14, 2025 +.Dt ZINJECT 8 +.Os +. +.Sh NAME +.Nm zinject +.Nd ZFS Fault Injector +.Sh DESCRIPTION +.Nm +creates artificial problems in a ZFS pool by simulating data corruption +or device failures. +This program is dangerous. +. +.Sh SYNOPSIS +.Bl -tag -width Ds +.It Xo +.Nm zinject +.Xc +List injection records. +. +.It Xo +.Nm zinject +.Fl b Ar objset : Ns Ar object : Ns Ar level : Ns Ar start : Ns Ar end +.Op Fl f Ar frequency +.Fl amu +.Op pool +.Xc +Force an error into the pool at a bookmark. +. +.It Xo +.Nm zinject +.Fl c Ar id Ns | Ns Sy all +.Xc +Cancel injection records. +. +.It Xo +.Nm zinject +.Fl d Ar vdev +.Fl A Sy degrade Ns | Ns Sy fault +.Ar pool +.Xc +Force a vdev into the DEGRADED or FAULTED state. +. +.It Xo +.Nm zinject +.Fl d Ar vdev +.Fl D Ar latency : Ns Ar lanes +.Op Fl T Ar read|write +.Ar pool +.Xc +Add an artificial delay to I/O requests on a particular +device, such that the requests take a minimum of +.Ar latency +milliseconds to complete. +Each delay has an associated number of +.Ar lanes +which defines the number of concurrent +I/O requests that can be processed. +.Pp +For example, with a single lane delay of 10 ms +.No (\& Ns Fl D Ar 10 : Ns Ar 1 ) , +the device will only be able to service a single I/O request +at a time with each request taking 10 ms to complete. +So, if only a single request is submitted every 10 ms, the +average latency will be 10 ms; but if more than one request +is submitted every 10 ms, the average latency will be more +than 10 ms. 
+.Pp +Similarly, if a delay of 10 ms is specified to have two +lanes +.No (\& Ns Fl D Ar 10 : Ns Ar 2 ) , +then the device will be able to service +two requests at a time, each with a minimum latency of 10 ms. +So, if two requests are submitted every 10 ms, then +the average latency will be 10 ms; but if more than two +requests are submitted every 10 ms, the average latency +will be more than 10 ms. +.Pp +Also note, these delays are additive. +So two invocations of +.Fl D Ar 10 : Ns Ar 1 +are roughly equivalent to a single invocation of +.Fl D Ar 10 : Ns Ar 2 . +This also means, that one can specify multiple +lanes with differing target latencies. +For example, an invocation of +.Fl D Ar 10 : Ns Ar 1 +followed by +.Fl D Ar 25 : Ns Ar 2 +will create 3 lanes on the device: one lane with a latency +of 10 ms and two lanes with a 25 ms latency. +. +.It Xo +.Nm zinject +.Fl d Ar vdev +.Op Fl e Ar device_error +.Op Fl L Ar label_error +.Op Fl T Ar failure +.Op Fl f Ar frequency +.Op Fl F +.Ar pool +.Xc +Force a vdev error. +. +.It Xo +.Nm zinject +.Fl i Ar seconds +.Ar pool +.Xc +Add an artificial delay during the future import of a pool. +This injector is automatically cleared after the import is finished. +. +.It Xo +.Nm zinject +.Fl I +.Op Fl s Ar seconds Ns | Ns Fl g Ar txgs +.Ar pool +.Xc +Simulate a hardware failure that fails to honor a cache flush. +. +.It Xo +.Nm zinject +.Fl p Ar function +.Ar pool +.Xc +Panic inside the specified function. +. +.It Xo +.Nm zinject +.Fl t Sy data +.Fl C Ar dvas +.Op Fl e Ar device_error +.Op Fl f Ar frequency +.Op Fl l Ar level +.Op Fl r Ar range +.Op Fl amq +.Ar path +.Xc +Force an error into the contents of a file. +. +.It Xo +.Nm zinject +.Fl t Sy dnode +.Fl C Ar dvas +.Op Fl e Ar device_error +.Op Fl f Ar frequency +.Op Fl l Ar level +.Op Fl amq +.Ar path +.Xc +Force an error into the metadnode for a file or directory. +. 
+.It Xo
+.Nm zinject
+.Fl t Ar mos_type
+.Fl C Ar dvas
+.Op Fl e Ar device_error
+.Op Fl f Ar frequency
+.Op Fl l Ar level
+.Op Fl r Ar range
+.Op Fl amqu
+.Ar pool
+.Xc
+Force an error into the MOS of a pool.
+.El
+.Sh OPTIONS
+.Bl -tag -width "-C dvas"
+.It Fl a
+Flush the ARC before injection.
+.It Fl b Ar objset : Ns Ar object : Ns Ar level : Ns Ar start : Ns Ar end
+Force an error into the pool at this bookmark tuple.
+Each number is in hexadecimal, and only one block can be specified.
+.It Fl C Ar dvas
+Inject the given error only into specific DVAs.
+The mask should be specified as a list of 0-indexed DVAs separated by commas
+.No (e.g . Ar 0,2 Ns No ).
+This option is not applicable to logical data errors such as
+.Sy decompress
+and
+.Sy decrypt .
+.It Fl d Ar vdev
+A vdev specified by path or GUID.
+.It Fl e Ar device_error
+Specify
+.Bl -tag -compact -width "decompress"
+.It Sy checksum
+for an ECKSUM error,
+.It Sy decompress
+for a data decompression error,
+.It Sy decrypt
+for a data decryption error,
+.It Sy corrupt
+to flip a bit in the data after a read,
+.It Sy dtl
+for an ECHILD error,
+.It Sy io
+for an EIO error where reopening the device will succeed,
+.It Sy nxio
+for an ENXIO error where reopening the device will fail, or
+.It Sy noop
+to drop the IO without executing it, and return success.
+.El
+.Pp
+For EIO and ENXIO, the "failed" reads or writes still occur.
+The probe simply sets the error value reported by the I/O pipeline
+so it appears the read or write failed.
+Decryption errors currently only work with file data.
+.It Fl f Ar frequency
+Only inject errors a fraction of the time.
+Expressed as a real number percentage between
+.Sy 0.0001
+and
+.Sy 100 .
+.It Fl F
+Fail faster.
+Do fewer checks.
+.It Fl g Ar txgs
+Run for this many transaction groups before reporting failure.
+.It Fl h
+Print the usage message.
+.It Fl l Ar level
+Inject an error at a particular block level.
+The default is
+.Sy 0 .
+.It Fl L Ar label_error +Set the label error region to one of +.Sy nvlist , +.Sy pad1 , +.Sy pad2 , +or +.Sy uber . +.It Fl m +Automatically remount the underlying filesystem. +.It Fl q +Quiet mode. +Only print the handler number added. +.It Fl r Ar range +Inject an error over a particular logical range of an object, which +will be translated to the appropriate blkid range according to the +object's properties. +.It Fl s Ar seconds +Run for this many seconds before reporting failure. +.It Fl T Ar type +Inject the error into I/O of this type. +.Bl -tag -compact -width "read, write, flush, claim, free" +.It Sy read , Sy write , Sy flush , Sy claim , Sy free +Fundamental I/O types +.It Sy all +All fundamental I/O types +.It Sy probe +Device probe I/O +.El +.It Fl t Ar mos_type +Set this to +.Bl -tag -compact -width "spacemap" +.It Sy mos +for any data in the MOS, +.It Sy mosdir +for an object directory, +.It Sy config +for the pool configuration, +.It Sy bpobj +for the block pointer list, +.It Sy spacemap +for the space map, +.It Sy metaslab +for the metaslab, or +.It Sy errlog +for the persistent error log. +.El +.It Fl u +Unload the pool after injection. +.El +. +.Sh ENVIRONMENT VARIABLES +.Bl -tag -width "ZF" +.It Ev ZFS_HOSTID +Run +.Nm +in debug mode. +.El +. +.Sh SEE ALSO +.Xr zfs 8 , +.Xr zpool 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-add.8 b/sys/contrib/openzfs/man/man8/zpool-add.8 new file mode 100644 index 000000000000..9d46fcf332e9 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-add.8 @@ -0,0 +1,139 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. 
+.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" Copyright (c) 2024 by Delphix. All Rights Reserved. +.\" +.Dd March 8, 2024 +.Dt ZPOOL-ADD 8 +.Os +. +.Sh NAME +.Nm zpool-add +.Nd add vdevs to ZFS storage pool +.Sh SYNOPSIS +.Nm zpool +.Cm add +.Op Fl fgLnP +.Op Fl -allow-in-use -allow-replication-mismatch -allow-ashift-mismatch +.Oo Fl o Ar property Ns = Ns Ar value Oc +.Ar pool vdev Ns … +. +.Sh DESCRIPTION +Adds the specified virtual devices to the given pool. +The +.Ar vdev +specification is described in the +.Em Virtual Devices +section of +.Xr zpoolconcepts 7 . +The behavior of the +.Fl f +option, and the device checks performed are described in the +.Nm zpool Cm create +subcommand. +.Bl -tag -width Ds +.It Fl f +Forces use of +.Ar vdev Ns s , +even if they appear in use, have conflicting ashift values, or specify +a conflicting replication level. +Not all devices can be overridden in this manner. +.It Fl g +Display +.Ar vdev , +GUIDs instead of the normal device names. +These GUIDs can be used in place of +device names for the zpool detach/offline/remove/replace commands. 
+.It Fl L
+Display real paths for
+.Ar vdev Ns s
+resolving all symbolic links.
+This can be used to look up the current block
+device name regardless of the
+.Pa /dev/disk
+path used to open it.
+.It Fl n
+Displays the configuration that would be used without actually adding the
+.Ar vdev Ns s .
+The actual device addition can still fail due to insufficient privileges or
+device sharing.
+.It Fl P
+Display real paths for
+.Ar vdev Ns s
+instead of only the last component of the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the given pool properties.
+See the
+.Xr zpoolprops 7
+manual page for a list of valid properties that can be set.
+The only property supported at the moment is
+.Sy ashift .
+.It Fl -allow-ashift-mismatch
+Disable the ashift validation which allows mismatched ashift values in the
+pool.
+Adding top-level
+.Ar vdev Ns s
+with different sector sizes will prohibit future device removal operations; see
+.Xr zpool-remove 8 .
+.It Fl -allow-in-use
+Allow vdevs to be added even if they might be in use in another pool.
+.It Fl -allow-replication-mismatch
+Allow vdevs with conflicting replication levels to be added to the pool.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 5, 13 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Adding a Mirror to a ZFS Storage Pool
+The following command adds two mirrored disks to the pool
+.Ar tank ,
+assuming the pool is already made up of two-way mirrors.
+The additional space is immediately available to any datasets within the pool.
+.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb
+.
+.Ss Example 2 : No Adding Cache Devices to a ZFS Pool
+The following command adds two disks for use as cache devices to a ZFS storage
+pool:
+.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd
+.Pp
+Once added, the cache devices gradually fill with content from main memory.
+Depending on the size of your cache devices, it could take over an hour for +them to fill. +Capacity and reads can be monitored using the +.Cm iostat +subcommand as follows: +.Dl # Nm zpool Cm iostat Fl v Ar pool 5 +. +.Sh SEE ALSO +.Xr zpool-attach 8 , +.Xr zpool-import 8 , +.Xr zpool-initialize 8 , +.Xr zpool-online 8 , +.Xr zpool-remove 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-attach.8 b/sys/contrib/openzfs/man/man8/zpool-attach.8 new file mode 100644 index 000000000000..f120350a5190 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-attach.8 @@ -0,0 +1,142 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd November 8, 2023 +.Dt ZPOOL-ATTACH 8 +.Os +. 
+.Sh NAME
+.Nm zpool-attach
+.Nd attach new device to existing ZFS vdev
+.Sh SYNOPSIS
+.Nm zpool
+.Cm attach
+.Op Fl fsw
+.Oo Fl o Ar property Ns = Ns Ar value Oc
+.Ar pool device new_device
+.
+.Sh DESCRIPTION
+Attaches
+.Ar new_device
+to the existing
+.Ar device .
+The behavior differs depending on whether the existing
+.Ar device
+is a RAID-Z device or a mirror/plain device.
+.Pp
+If the existing device is a mirror or plain device
+.Pq e.g. specified as Qo Li sda Qc or Qq Li mirror-7 ,
+the new device will be mirrored with the existing device, a resilver will be
+initiated, and the new device will contribute to additional redundancy once the
+resilver completes.
+If
+.Ar device
+is not currently part of a mirrored configuration,
+.Ar device
+automatically transforms into a two-way mirror of
+.Ar device
+and
+.Ar new_device .
+If
+.Ar device
+is part of a two-way mirror, attaching
+.Ar new_device
+creates a three-way mirror, and so on.
+In either case,
+.Ar new_device
+begins to resilver immediately and any running scrub is canceled.
+.Pp
+If the existing device is a RAID-Z device
+.Pq e.g. specified as Qq Ar raidz2-0 ,
+the new device will become part of that RAID-Z group.
+A "raidz expansion" will be initiated, and once the expansion completes,
+the new device will contribute additional space to the RAID-Z group.
+The expansion entails reading all allocated space from existing disks in the
+RAID-Z group, and rewriting it to the new disks in the RAID-Z group (including
+the newly added
+.Ar device ) .
+Its progress can be monitored with
+.Nm zpool Cm status .
+.Pp
+Data redundancy is maintained during and after the expansion.
+If a disk fails while the expansion is in progress, the expansion pauses until
+the health of the RAID-Z vdev is restored (e.g. by replacing the failed disk
+and waiting for reconstruction to complete).
+Expansion does not change the number of failures that can be tolerated
+without data loss (e.g.
a RAID-Z2 is still a RAID-Z2 even after expansion). +A RAID-Z vdev can be expanded multiple times. +.Pp +After the expansion completes, old blocks retain their old data-to-parity +ratio +.Pq e.g. 5-wide RAID-Z2 has 3 data and 2 parity +but distributed among the larger set of disks. +New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide +RAID-Z2 which has been expanded once to 6-wide, has 4 data and 2 parity). +However, the vdev's assumed parity ratio does not change, so slightly less +space than is expected may be reported for newly-written blocks, according to +.Nm zfs Cm list , +.Nm df , +.Nm ls Fl s , +and similar tools. +.Pp +A pool-wide scrub is initiated at the end of the expansion in order to verify +the checksums of all blocks which have been copied during the expansion. +.Bl -tag -width Ds +.It Fl f +Forces use of +.Ar new_device , +even if it appears to be in use. +Not all devices can be overridden in this manner. +.It Fl o Ar property Ns = Ns Ar value +Sets the given pool properties. +See the +.Xr zpoolprops 7 +manual page for a list of valid properties that can be set. +The only property supported at the moment is +.Sy ashift . +.It Fl s +When attaching to a mirror or plain device, the +.Ar new_device +is reconstructed sequentially to restore redundancy as quickly as possible. +Checksums are not verified during sequential reconstruction so a scrub is +started when the resilver completes. +.It Fl w +Waits until +.Ar new_device +has finished resilvering or expanding before returning. +.El +. 
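+.Sh EXAMPLES
+.\" The pool and device names in these examples are illustrative.
+.Ss Example 1 : No Converting a Single Device to a Mirror
+Assuming a pool
+.Ar tank
+whose top-level vdev is the single disk
+.Pa sda ,
+the following command attaches
+.Pa sdb ,
+creating a two-way mirror, and waits for the resilver to complete:
+.Dl # Nm zpool Cm attach Fl w Ar tank Pa sda sdb
+.
+.Ss Example 2 : No Expanding a RAID-Z Group
+Assuming a pool
+.Ar tank
+containing the RAID-Z group
+.Ar raidz2-0 ,
+the following command starts an expansion onto the new disk
+.Pa sdg :
+.Dl # Nm zpool Cm attach Ar tank raidz2-0 Pa sdg
+.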
+.Sh SEE ALSO +.Xr zpool-add 8 , +.Xr zpool-detach 8 , +.Xr zpool-import 8 , +.Xr zpool-initialize 8 , +.Xr zpool-online 8 , +.Xr zpool-replace 8 , +.Xr zpool-resilver 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-checkpoint.8 b/sys/contrib/openzfs/man/man8/zpool-checkpoint.8 new file mode 100644 index 000000000000..b654f669cfa2 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-checkpoint.8 @@ -0,0 +1,73 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-CHECKPOINT 8 +.Os +. +.Sh NAME +.Nm zpool-checkpoint +.Nd check-point current ZFS storage pool state +.Sh SYNOPSIS +.Nm zpool +.Cm checkpoint +.Op Fl d Op Fl w +.Ar pool +. 
+.Sh DESCRIPTION
+Checkpoints the current state of
+.Ar pool ,
+which can later be restored by
+.Nm zpool Cm import --rewind-to-checkpoint .
+The existence of a checkpoint in a pool prohibits the following
+.Nm zpool
+subcommands:
+.Cm remove , attach , detach , split , No and Cm reguid .
+In addition, it may break reservation boundaries if the pool lacks free
+space.
+The
+.Nm zpool Cm status
+command indicates the existence of a checkpoint or the progress of discarding a
+checkpoint from a pool.
+.Nm zpool Cm list
+can be used to check how much space the checkpoint takes from the pool.
+.
+.Sh OPTIONS
+.Bl -tag -width Ds
+.It Fl d , -discard
+Discards an existing checkpoint from
+.Ar pool .
+.It Fl w , -wait
+Waits until the checkpoint has finished being discarded before returning.
+.El
+.
+.Sh SEE ALSO
+.Xr zfs-snapshot 8 ,
+.Xr zpool-import 8 ,
+.Xr zpool-status 8
diff --git a/sys/contrib/openzfs/man/man8/zpool-clear.8 b/sys/contrib/openzfs/man/man8/zpool-clear.8
new file mode 100644
index 000000000000..70cd8325bd0e
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool-clear.8
@@ -0,0 +1,72 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd April 29, 2024
+.Dt ZPOOL-CLEAR 8
+.Os
+.
+.Sh NAME
+.Nm zpool-clear
+.Nd clear device errors in ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm clear
+.Op Fl -power
+.Ar pool
+.Oo Ar device Oc Ns …
+.
+.Sh DESCRIPTION
+Clears device errors in a pool.
+If no arguments are specified, all device errors within the pool are cleared.
+If one or more devices are specified, only those errors associated with the
+specified device or devices are cleared.
+.Pp
+If the pool was suspended, it will be brought back online provided the
+devices can be accessed.
+Pools with
+.Sy multihost
+enabled which have been suspended cannot be resumed when there is evidence
+that the pool was imported by another host.
+The same checks performed during an import will be applied before the clear
+proceeds.
+.Bl -tag -width Ds
+.It Fl -power
+Power on the device's slot in the storage enclosure and wait for the device
+to show up before attempting to clear errors.
+This is done on all the devices specified.
+Alternatively, you can set the
+.Sy ZPOOL_AUTO_POWER_ON_SLOT
+environment variable to always enable this behavior.
+Note: This flag currently works on Linux only.
+.El
+.
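+.Sh EXAMPLES
+.\" The pool and device name in this example are illustrative.
+.Ss Example 1 : No Clearing Errors on a Single Device
+Assuming transient write errors were logged against
+.Pa sda
+in the pool
+.Ar tank ,
+the following command resets the error counters for that device only:
+.Dl # Nm zpool Cm clear Ar tank Pa sda
+.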
+.Sh SEE ALSO +.Xr zdb 8 , +.Xr zpool-reopen 8 , +.Xr zpool-status 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-create.8 b/sys/contrib/openzfs/man/man8/zpool-create.8 new file mode 100644 index 000000000000..a36ae260a158 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-create.8 @@ -0,0 +1,245 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org> +.\" +.Dd July 11, 2022 +.Dt ZPOOL-CREATE 8 +.Os +. 
+.Sh NAME
+.Nm zpool-create
+.Nd create ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm create
+.Op Fl dfn
+.Op Fl m Ar mountpoint
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Oo Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value Oc
+.Op Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
+.Oo Fl O Ar file-system-property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Op Fl t Ar tname
+.Ar pool
+.Ar vdev Ns …
+.
+.Sh DESCRIPTION
+Creates a new storage pool containing the virtual devices specified on the
+command line.
+The pool name must begin with a letter, and can only contain
+alphanumeric characters as well as the underscore
+.Pq Qq Sy _ ,
+dash
+.Pq Qq Sy \&- ,
+colon
+.Pq Qq Sy \&: ,
+space
+.Pq Qq Sy \&\ ,
+and period
+.Pq Qq Sy \&. .
+The pool names
+.Sy mirror ,
+.Sy raidz ,
+.Sy draid ,
+.Sy spare
+and
+.Sy log
+are reserved, as are names beginning with
+.Sy mirror ,
+.Sy raidz ,
+.Sy draid ,
+and
+.Sy spare .
+The
+.Ar vdev
+specification is described in the
+.Sx Virtual Devices
+section of
+.Xr zpoolconcepts 7 .
+.Pp
+The command attempts to verify that each device specified is accessible and not
+currently in use by another subsystem.
+However, this check is not robust enough
+to detect simultaneous attempts to use a new device in different pools, even if
+.Sy multihost Ns = Sy enabled .
+The administrator must ensure that simultaneous invocations of any combination
+of
+.Nm zpool Cm replace ,
+.Nm zpool Cm create ,
+.Nm zpool Cm add ,
+or
+.Nm zpool Cm labelclear
+do not refer to the same device.
+Using the same device in two pools will result in pool corruption.
+.Pp
+There are some uses, such as being currently mounted, or specified as the
+dedicated dump device, that prevent a device from ever being used by ZFS.
+Other uses, such as having a preexisting UFS file system, can be overridden with
+.Fl f .
+.Pp
+The command also checks that the replication strategy for the pool is
+consistent.
+An attempt to combine redundant and non-redundant storage in a single pool,
+or to mix disks and files, results in an error unless
+.Fl f
+is specified.
+The use of differently-sized devices within a single raidz or mirror group is
+also flagged as an error unless
+.Fl f
+is specified.
+.Pp
+Unless the
+.Fl R
+option is specified, the default mount point is
+.Pa / Ns Ar pool .
+The mount point must not exist or must be empty, or else the root dataset
+will not be able to be mounted.
+This can be overridden with the
+.Fl m
+option.
+.Pp
+By default, all supported features are enabled on the new pool.
+The
+.Fl d
+option and the
+.Fl o Ar compatibility
+property
+.Pq e.g. Fl o Sy compatibility Ns = Ns Ar 2020
+can be used to restrict the features that are enabled, so that the
+pool can be imported on other releases of ZFS.
+.Bl -tag -width "-t tname"
+.It Fl d
+Do not enable any features on the new pool.
+Individual features can be enabled by setting their corresponding properties to
+.Sy enabled
+with
+.Fl o .
+See
+.Xr zpool-features 7
+for details about feature properties.
+.It Fl f
+Forces use of
+.Ar vdev Ns s ,
+even if they appear in use or specify a conflicting replication level.
+Not all devices can be overridden in this manner.
+.It Fl m Ar mountpoint
+Sets the mount point for the root dataset.
+The default mount point is
+.Pa /pool
+or
+.Pa altroot/pool
+if
+.Sy altroot
+is specified.
+The mount point must be an absolute path,
+.Sy legacy ,
+or
+.Sy none .
+For more information on dataset mount points, see
+.Xr zfsprops 7 .
+.It Fl n
+Displays the configuration that would be used without actually creating the
+pool.
+The actual pool creation can still fail due to insufficient privileges or
+device sharing.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the given pool properties.
+See
+.Xr zpoolprops 7
+for a list of valid properties that can be set.
+.It Fl o Ar compatibility Ns = Ns Sy off Ns | Ns Sy legacy Ns | Ns Ar file Ns Oo , Ns Ar file Oc Ns …
+Specifies compatibility feature sets.
+See
+.Xr zpool-features 7
+for more information about compatibility feature sets.
+.It Fl o Sy feature@ Ns Ar feature Ns = Ns Ar value
+Sets the given pool feature.
+See
+.Xr zpool-features 7
+for a list of valid features that can be set.
+The value can be either
+.Sy disabled
+or
+.Sy enabled .
+.It Fl O Ar file-system-property Ns = Ns Ar value
+Sets the given file system properties in the root file system of the pool.
+See
+.Xr zfsprops 7
+for a list of valid properties that can be set.
+.It Fl R Ar root
+Equivalent to
+.Fl o Sy cachefile Ns = Ns Sy none Fl o Sy altroot Ns = Ns Ar root
+.It Fl t Ar tname
+Sets the in-core pool name to
+.Ar tname
+while the on-disk name will be the name specified as
+.Ar pool .
+This will set the default of the
+.Sy cachefile
+property to
+.Sy none .
+This is intended
+to handle name space collisions when creating pools for other systems,
+such as virtual machines or physical machines whose pools live on network
+block devices.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 1, 2, 3, 4, 11, 12 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Creating a RAID-Z Storage Pool
+The following command creates a pool with a single raidz root vdev that
+consists of six disks:
+.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf
+.
+.Ss Example 2 : No Creating a Mirrored Storage Pool
+The following command creates a pool with two mirrors, where each mirror
+contains two disks:
+.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd
+.
+.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions
+The following command creates a non-redundant pool using two disk partitions:
+.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2
+.
+.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files +The following command creates a non-redundant pool using files. +While not recommended, a pool based on files can be useful for experimental +purposes. +.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b +. +.Ss Example 5 : No Managing Hot Spares +The following command creates a new pool with an available hot spare: +.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc +. +.Ss Example 6 : No Creating a ZFS Pool with Mirrored Separate Intent Logs +The following command creates a ZFS storage pool consisting of two, two-way +mirrors and mirrored log devices: +.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf +. +.Sh SEE ALSO +.Xr zpool-destroy 8 , +.Xr zpool-export 8 , +.Xr zpool-import 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-ddtprune.8 b/sys/contrib/openzfs/man/man8/zpool-ddtprune.8 new file mode 100644 index 000000000000..2103d274d0fb --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-ddtprune.8 @@ -0,0 +1,49 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or http://www.opensolaris.org/os/licensing. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\"
+.\" Copyright (c) 2024, Klara Inc.
+.\"
+.Dd June 17, 2024
+.Dt ZPOOL-DDTPRUNE 8
+.Os
+.
+.Sh NAME
+.Nm zpool-ddtprune
+.Nd prune the oldest entries from the single reference dedup table(s)
+.Sh SYNOPSIS
+.Nm zpool
+.Cm ddtprune
+.Fl d Ar days | Fl p Ar percentage
+.Ar pool
+.Sh DESCRIPTION
+This command prunes older unique entries from the dedup table.
+As a complement to the dedup quota feature,
+.Sy ddtprune
+allows removal of older non-duplicate entries to make room for
+newer duplicate entries.
+.Pp
+The amount to prune can be based on a target percentage of the unique entries
+or based on the age (i.e., every unique entry older than N days).
+.
+.Sh SEE ALSO
+.Xr zdb 8 ,
+.Xr zpool-status 8
diff --git a/sys/contrib/openzfs/man/man8/zpool-destroy.8 b/sys/contrib/openzfs/man/man8/zpool-destroy.8
new file mode 100644
index 000000000000..82f3f3e203d6
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool-destroy.8
@@ -0,0 +1,58 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-DESTROY 8 +.Os +. +.Sh NAME +.Nm zpool-destroy +.Nd destroy ZFS storage pool +.Sh SYNOPSIS +.Nm zpool +.Cm destroy +.Op Fl f +.Ar pool +. +.Sh DESCRIPTION +Destroys the given pool, freeing up any devices for other use. +This command tries to unmount any active datasets before destroying the pool. +.Bl -tag -width Ds +.It Fl f +Forcefully unmount all active datasets. +.El +. +.Sh EXAMPLES +.\" These are, respectively, examples 7 from zpool.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Destroying a ZFS Storage Pool +The following command destroys the pool +.Ar tank +and any datasets contained within: +.Dl # Nm zpool Cm destroy Fl f Ar tank diff --git a/sys/contrib/openzfs/man/man8/zpool-detach.8 b/sys/contrib/openzfs/man/man8/zpool-detach.8 new file mode 100644 index 000000000000..79a44310110d --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-detach.8 @@ -0,0 +1,59 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. 
+.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-DETACH 8 +.Os +. +.Sh NAME +.Nm zpool-detach +.Nd detach device from ZFS mirror +.Sh SYNOPSIS +.Nm zpool +.Cm detach +.Ar pool device +. +.Sh DESCRIPTION +Detaches +.Ar device +from a mirror. +The operation is refused if there are no other valid replicas of the data. +If +.Ar device +may be re-added to the pool later on then consider the +.Nm zpool Cm offline +command instead. +. +.Sh SEE ALSO +.Xr zpool-attach 8 , +.Xr zpool-labelclear 8 , +.Xr zpool-offline 8 , +.Xr zpool-remove 8 , +.Xr zpool-replace 8 , +.Xr zpool-split 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-events.8 b/sys/contrib/openzfs/man/man8/zpool-events.8 new file mode 100644 index 000000000000..36a9864dc73b --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-events.8 @@ -0,0 +1,548 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. 
+.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" Copyright (c) 2024, 2025, Klara, Inc. +.\" +.Dd July 3, 2025 +.Dt ZPOOL-EVENTS 8 +.Os +. +.Sh NAME +.Nm zpool-events +.Nd list recent events generated by kernel +.Sh SYNOPSIS +.Nm zpool +.Cm events +.Op Fl vHf +.Op Ar pool +.Nm zpool +.Cm events +.Fl c +. +.Sh DESCRIPTION +Lists all recent events generated by the ZFS kernel modules. +These events are consumed by the +.Xr zed 8 +and used to automate administrative tasks such as replacing a failed device +with a hot spare. +For more information about the subclasses and event payloads +that can be generated see +.Sx EVENTS +and the following sections. +. +.Sh OPTIONS +.Bl -tag -compact -width Ds +.It Fl c +Clear all previous events. +.It Fl f +Follow mode. +.It Fl H +Scripted mode. +Do not display headers, and separate fields by a +single tab instead of arbitrary space. +.It Fl v +Print the entire payload for each event. +.El +. +.Sh EVENTS +These are the different event subclasses. 
+The full event name would be
+.Sy ereport.fs.zfs.\& Ns Em SUBCLASS ,
+but only the last part is listed here.
+.Pp
+.Bl -tag -compact -width "vdev.bad_guid_sum"
+.It Sy checksum
+Issued when a checksum error has been detected.
+.It Sy io
+Issued when there is an I/O error in a vdev in the pool.
+.It Sy data
+Issued when there have been data errors in the pool.
+.It Sy deadman
+Issued when an I/O request is determined to be "hung"; this can be caused
+by lost completion events due to flaky hardware or drivers.
+See
+.Sy zfs_deadman_failmode
+in
+.Xr zfs 4
+for additional information regarding "hung" I/O detection and configuration.
+.It Sy delay
+Issued when a completed I/O request exceeds the maximum allowed time
+specified by the
+.Sy zio_slow_io_ms
+module parameter.
+This can be an indicator of problems with the underlying storage device.
+The number of delay events is ratelimited by the
+.Sy zfs_slow_io_events_per_second
+module parameter.
+.It Sy dio_verify_rd
+Issued when there was a checksum verify error after a Direct I/O read was
+issued.
+.It Sy dio_verify_wr
+Issued when there was a checksum verify error after a Direct I/O write was
+issued.
+This event can only take place if the module parameter
+.Sy zfs_vdev_direct_write_verify
+is not set to zero.
+See
+.Xr zfs 4
+for more details on the
+.Sy zfs_vdev_direct_write_verify
+module parameter.
+.It Sy config
+Issued every time a vdev change has been made to the pool.
+.It Sy zpool
+Issued when a pool cannot be imported.
+.It Sy zpool.destroy
+Issued when a pool is destroyed.
+.It Sy zpool.export
+Issued when a pool is exported.
+.It Sy zpool.import
+Issued when a pool is imported.
+.It Sy zpool.reguid
+Issued when a REGUID (a new unique identifier for the pool) has been
+generated.
+.It Sy vdev.unknown
+Issued when the vdev is unknown, such as when trying to clear device errors on
+a vdev that has failed or been removed from the system/pool and is no longer
+available.
+.It Sy vdev.open_failed
+Issued when a vdev could not be opened (because it didn't exist, for example).
+.It Sy vdev.corrupt_data
+Issued when corrupt data has been detected on a vdev.
+.It Sy vdev.no_replicas
+Issued when there are no more replicas to sustain the pool.
+This would lead to the pool being
+.Em DEGRADED .
+.It Sy vdev.bad_guid_sum
+Issued when a missing device in the pool has been detected.
+.It Sy vdev.too_small
+Issued when the system (kernel) has removed a device, and ZFS
+notices that the device isn't there any more.
+This is usually followed by a
+.Sy probe_failure
+event.
+.It Sy vdev.bad_label
+Issued when the label is OK but invalid.
+.It Sy vdev.bad_ashift
+Issued when the ashift alignment requirement has increased.
+.It Sy vdev.remove
+Issued when a vdev is detached from a mirror (or a spare detached from a
+vdev where it has been used to replace a failed drive; this only works if
+the original drive has been re-added).
+.It Sy vdev.clear
+Issued when clearing device errors in a pool, such as running
+.Nm zpool Cm clear
+on a device in the pool.
+.It Sy vdev.check
+Issued when a check to see if a given vdev could be opened is started.
+.It Sy vdev.spare
+Issued when a spare has kicked in to replace a failed device.
+.It Sy vdev.autoexpand
+Issued when a vdev can be automatically expanded.
+.It Sy io_failure
+Issued when there is an I/O failure in a vdev in the pool.
+.It Sy probe_failure
+Issued when a probe fails on a vdev.
+This would occur if a vdev
+has been removed from the system outside of ZFS (such as when the kernel
+has removed the device).
+.It Sy log_replay
+Issued when the intent log cannot be replayed.
+This can occur in the case of a missing or damaged log device.
+.It Sy resilver.start
+Issued when a resilver is started.
+.It Sy resilver.finish
+Issued when the running resilver has finished.
+.It Sy scrub.start
+Issued when a scrub is started on a pool.
+.It Sy scrub.finish
+Issued when a pool has finished scrubbing.
+.It Sy scrub.abort
+Issued when a scrub is aborted on a pool.
+.It Sy scrub.resume
+Issued when a scrub is resumed on a pool.
+.It Sy scrub.paused
+Issued when a scrub is paused on a pool.
+.It Sy bootfs.vdev.attach
+.It Sy sitout
+Issued when a
+.Sy RAIDZ
+or
+.Sy DRAID
+vdev triggers the
+.Sy autosit
+logic.
+This logic detects when a disk in such a vdev is significantly slower than its
+peers, and sits it out temporarily to preserve the performance of the pool.
+.El
+.
+.Sh PAYLOADS
+This is the payload (data, information) that accompanies an
+event.
+.Pp
+For
+.Xr zed 8 ,
+these are set to uppercase and prefixed with
+.Sy ZEVENT_ .
+.Pp
+.Bl -tag -compact -width "vdev_cksum_errors"
+.It Sy pool
+Pool name.
+.It Sy pool_failmode
+Failmode -
+.Sy wait ,
+.Sy continue ,
+or
+.Sy panic .
+See the
+.Sy failmode
+property in
+.Xr zpoolprops 7
+for more information.
+.It Sy pool_guid
+The GUID of the pool.
+.It Sy pool_context
+The load state for the pool (0=none, 1=open, 2=import, 3=tryimport, 4=recover,
+5=error).
+.It Sy vdev_guid
+The GUID of the vdev in question (the vdev failing or operated upon with
+.Nm zpool Cm clear ,
+etc.).
+.It Sy vdev_type
+Type of vdev -
+.Sy disk ,
+.Sy file ,
+.Sy mirror ,
+etc.
+See the
+.Sy Virtual Devices
+section of
+.Xr zpoolconcepts 7
+for more information on possible values.
+.It Sy vdev_path
+Full path of the vdev, including any
+.Em -partX .
+.It Sy vdev_devid
+ID of vdev (if any).
+.It Sy vdev_fru
+Physical FRU location.
+.It Sy vdev_state
+State of vdev (0=uninitialized, 1=closed, 2=offline, 3=removed, 4=failed to
+open, 5=faulted, 6=degraded, 7=healthy).
+.It Sy vdev_ashift
+The ashift value of the vdev.
+.It Sy vdev_complete_ts
+The time the last I/O request completed for the specified vdev.
+.It Sy vdev_delta_ts
+The time since the last I/O request completed for the specified vdev.
+.It Sy vdev_spare_paths
+List of spares, including full path and any
+.Em -partX .
+.It Sy vdev_spare_guids
+GUID(s) of spares.
+.It Sy vdev_read_errors
+The number of read errors detected on the vdev.
+.It Sy vdev_write_errors
+The number of write errors detected on the vdev.
+.It Sy vdev_cksum_errors
+The number of checksum errors detected on the vdev.
+.It Sy parent_guid
+GUID of the vdev parent.
+.It Sy parent_type
+Type of parent.
+See
+.Sy vdev_type .
+.It Sy parent_path
+Path of the vdev parent (if any).
+.It Sy parent_devid
+ID of the vdev parent (if any).
+.It Sy zio_objset
+The object set number for a given I/O request.
+.It Sy zio_object
+The object number for a given I/O request.
+.It Sy zio_level
+The indirect level for the block.
+Level 0 is the lowest level and includes data blocks.
+Values > 0 indicate metadata blocks at the appropriate level.
+.It Sy zio_blkid
+The block ID for a given I/O request.
+.It Sy zio_err
+The error number for a failure when handling a given I/O request,
+compatible with
+.Xr errno 3
+with the value of
+.Sy EBADE
+used to indicate a ZFS checksum error.
+.It Sy zio_offset
+The offset in bytes of where to write the I/O request for the specified vdev.
+.It Sy zio_size
+The size in bytes of the I/O request.
+.It Sy zio_flags
+The current flags describing how the I/O request should be handled.
+See the
+.Sy I/O FLAGS
+section for the full list of I/O flags.
+.It Sy zio_stage
+The current stage of the I/O in the pipeline.
+See the
+.Sy I/O STAGES
+section for a full list of all the I/O stages.
+.It Sy zio_pipeline
+The valid pipeline stages for the I/O.
+See the
+.Sy I/O STAGES
+section for a full list of all the I/O stages.
+.It Sy zio_priority
+The queue priority of the I/O request.
+See the
+.Sy I/O PRIORITIES
+section for a full list of all the I/O priorities.
+.It Sy zio_type
+The type of the I/O request.
+See the
+.Sy I/O TYPES
+section for a full list of all the I/O types.
+.It Sy zio_delay
+The time elapsed (in nanoseconds) waiting for the block layer to complete the
+I/O request.
+Unlike
+.Sy zio_delta ,
+this does not include any vdev queuing time and is
+therefore solely a measure of the block layer performance.
+.It Sy zio_timestamp
+The time when a given I/O request was submitted.
+.It Sy zio_delta
+The time required to service a given I/O request.
+.It Sy prev_state
+The previous state of the vdev.
+.It Sy cksum_algorithm
+Checksum algorithm used.
+See
+.Xr zfsprops 7
+for more information on the available checksum algorithms.
+.It Sy cksum_byteswap
+Whether or not the data is byteswapped.
+.It Sy bad_ranges
+.No [\& Ns Ar start , end )
+pairs of corruption offsets.
+Offsets are always aligned on a 64-bit boundary,
+and can include some gaps of non-corruption.
+(See
+.Sy bad_ranges_min_gap )
+.It Sy bad_ranges_min_gap
+In order to bound the size of the
+.Sy bad_ranges
+array, gaps of non-corruption
+less than or equal to
+.Sy bad_ranges_min_gap
+bytes have been merged with
+adjacent corruption.
+Always at least 8 bytes, since corruption is detected on a 64-bit word basis.
+.It Sy bad_range_sets
+This array has one element per range in
+.Sy bad_ranges .
+Each element contains
+the count of bits in that range which were clear in the good data and set
+in the bad data.
+.It Sy bad_range_clears
+This array has one element per range in
+.Sy bad_ranges .
+Each element contains
+the count of bits for that range which were set in the good data and clear in
+the bad data.
+.It Sy bad_set_bits
+If this field exists, it is an array of
+.Pq Ar bad data No & ~( Ns Ar good data ) ;
+that is, the bits set in the bad data which are cleared in the good data.
+Each element corresponds to a byte whose offset is in a range in
+.Sy bad_ranges ,
+and the array is ordered by offset.
+Thus, the first element is the first byte in the first
+.Sy bad_ranges
+range, and the last element is the last byte in the last
+.Sy bad_ranges
+range.
+.It Sy bad_cleared_bits
+Like
+.Sy bad_set_bits ,
+but contains
+.Pq Ar good data No & ~( Ns Ar bad data ) ;
+that is, the bits set in the good data which are cleared in the bad data.
+.El
+.
+.Sh I/O STAGES
+The ZFS I/O pipeline comprises various stages, which are defined below.
+The individual stages are used to construct these basic I/O
+operations: Read, Write, Free, Claim, Flush, and Trim.
+These stages may be
+set on an event to describe the life cycle of a given I/O request.
+.Pp
+.TS
+tab(:);
+l l l .
+Stage:Bit Mask:Operations
+_:_:_
+ZIO_STAGE_OPEN:0x00000001:RWFCXT
+
+ZIO_STAGE_READ_BP_INIT:0x00000002:R-----
+ZIO_STAGE_WRITE_BP_INIT:0x00000004:-W----
+ZIO_STAGE_FREE_BP_INIT:0x00000008:--F---
+ZIO_STAGE_ISSUE_ASYNC:0x00000010:-WF--T
+ZIO_STAGE_WRITE_COMPRESS:0x00000020:-W----
+
+ZIO_STAGE_ENCRYPT:0x00000040:-W----
+ZIO_STAGE_CHECKSUM_GENERATE:0x00000080:-W----
+
+ZIO_STAGE_NOP_WRITE:0x00000100:-W----
+
+ZIO_STAGE_BRT_FREE:0x00000200:--F---
+
+ZIO_STAGE_DDT_READ_START:0x00000400:R-----
+ZIO_STAGE_DDT_READ_DONE:0x00000800:R-----
+ZIO_STAGE_DDT_WRITE:0x00001000:-W----
+ZIO_STAGE_DDT_FREE:0x00002000:--F---
+
+ZIO_STAGE_GANG_ASSEMBLE:0x00004000:RWFC--
+ZIO_STAGE_GANG_ISSUE:0x00008000:RWFC--
+
+ZIO_STAGE_DVA_THROTTLE:0x00010000:-W----
+ZIO_STAGE_DVA_ALLOCATE:0x00020000:-W----
+ZIO_STAGE_DVA_FREE:0x00040000:--F---
+ZIO_STAGE_DVA_CLAIM:0x00080000:---C--
+
+ZIO_STAGE_READY:0x00100000:RWFCXT
+
+ZIO_STAGE_VDEV_IO_START:0x00200000:RW--XT
+ZIO_STAGE_VDEV_IO_DONE:0x00400000:RW--XT
+ZIO_STAGE_VDEV_IO_ASSESS:0x00800000:RW--XT
+
+ZIO_STAGE_CHECKSUM_VERIFY:0x01000000:R-----
+ZIO_STAGE_DIO_CHECKSUM_VERIFY:0x02000000:-W----
+
+ZIO_STAGE_DONE:0x04000000:RWFCXT
+.TE
+.
+.Sh I/O FLAGS
+Every I/O request in the pipeline contains a set of flags which describe its
+function and are used to govern its behavior.
+These flags will be set in an event as a
+.Sy zio_flags
+payload entry.
+.Pp
+.TS
+tab(:);
+l l .
+Flag:Bit Mask +_:_ +ZIO_FLAG_DONT_AGGREGATE:0x00000001 +ZIO_FLAG_IO_REPAIR:0x00000002 +ZIO_FLAG_SELF_HEAL:0x00000004 +ZIO_FLAG_RESILVER:0x00000008 +ZIO_FLAG_SCRUB:0x00000010 +ZIO_FLAG_SCAN_THREAD:0x00000020 +ZIO_FLAG_PHYSICAL:0x00000040 + +ZIO_FLAG_CANFAIL:0x00000080 +ZIO_FLAG_SPECULATIVE:0x00000100 +ZIO_FLAG_CONFIG_WRITER:0x00000200 +ZIO_FLAG_DONT_RETRY:0x00000400 +ZIO_FLAG_NODATA:0x00001000 +ZIO_FLAG_INDUCE_DAMAGE:0x00002000 + +ZIO_FLAG_ALLOC_THROTTLED:0x00004000 +ZIO_FLAG_IO_RETRY:0x00008000 +ZIO_FLAG_PROBE:0x00010000 +ZIO_FLAG_TRYHARD:0x00020000 +ZIO_FLAG_OPTIONAL:0x00040000 + +ZIO_FLAG_DONT_QUEUE:0x00080000 +ZIO_FLAG_DONT_PROPAGATE:0x00100000 +ZIO_FLAG_IO_BYPASS:0x00200000 +ZIO_FLAG_IO_REWRITE:0x00400000 +ZIO_FLAG_RAW_COMPRESS:0x00800000 +ZIO_FLAG_RAW_ENCRYPT:0x01000000 + +ZIO_FLAG_GANG_CHILD:0x02000000 +ZIO_FLAG_DDT_CHILD:0x04000000 +ZIO_FLAG_GODFATHER:0x08000000 +ZIO_FLAG_NOPWRITE:0x10000000 +ZIO_FLAG_REEXECUTED:0x20000000 +ZIO_FLAG_DELEGATED:0x40000000 +ZIO_FLAG_FASTWRITE:0x80000000 +.TE +. +.Sh I/O TYPES +Every I/O request in the pipeline has a single type value. +This value describes the kind of low-level work the I/O represents. +This value will be set in an event as a +.Sy zio_type +payload entry. +.Pp +.TS +tab(:); +l l l . +Type:Value:Description +_:_:_ +ZIO_TYPE_NULL:0x0:internal I/O sync point +ZIO_TYPE_READ:0x1:data read +ZIO_TYPE_WRITE:0x2:data write +ZIO_TYPE_FREE:0x3:block free +ZIO_TYPE_CLAIM:0x4:block claim (ZIL replay) +ZIO_TYPE_FLUSH:0x5:disk cache flush request +ZIO_TYPE_TRIM:0x6:trim (discard) +.TE +. +.Sh I/O PRIORITIES +Every I/O request in the pipeline has a single priority value. +This value is used by the queuing code to decide which I/O to issue next. +This value will be set in an event as a +.Sy zio_priority +payload entry. +.Pp +.TS +tab(:); +l l l . 
+Type:Value:Description
+_:_:_
+ZIO_PRIORITY_SYNC_READ:0x0:
+ZIO_PRIORITY_SYNC_WRITE:0x1:ZIL
+ZIO_PRIORITY_ASYNC_READ:0x2:prefetch
+ZIO_PRIORITY_ASYNC_WRITE:0x3:spa_sync()
+ZIO_PRIORITY_SCRUB:0x4:asynchronous scrub/resilver reads
+ZIO_PRIORITY_REMOVAL:0x5:reads/writes for vdev removal
+ZIO_PRIORITY_INITIALIZING:0x6:initializing I/O
+ZIO_PRIORITY_TRIM:0x7:trim I/O (discard)
+ZIO_PRIORITY_REBUILD:0x8:reads/writes for vdev rebuild
+ZIO_PRIORITY_NOW:0xa:non-queued I/Os (e.g. free)
+.TE
+.
+.Sh SEE ALSO
+.Xr zfs 4 ,
+.Xr zed 8 ,
+.Xr zpool-wait 8
diff --git a/sys/contrib/openzfs/man/man8/zpool-export.8 b/sys/contrib/openzfs/man/man8/zpool-export.8
new file mode 100644
index 000000000000..02495c088f94
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool-export.8
@@ -0,0 +1,83 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-EXPORT 8 +.Os +. +.Sh NAME +.Nm zpool-export +.Nd export ZFS storage pools +.Sh SYNOPSIS +.Nm zpool +.Cm export +.Op Fl f +.Fl a Ns | Ns Ar pool Ns … +. +.Sh DESCRIPTION +Exports the given pools from the system. +All devices are marked as exported, but are still considered in use by other +subsystems. +The devices can be moved between systems +.Pq even those of different endianness +and imported as long as a sufficient number of devices are present. +.Pp +Before exporting the pool, all datasets within the pool are unmounted. +A pool can not be exported if it has a shared spare that is currently being +used. +.Pp +For pools to be portable, you must give the +.Nm zpool +command whole disks, not just partitions, so that ZFS can label the disks with +portable EFI labels. +Otherwise, disk drivers on platforms of different endianness will not recognize +the disks. +.Bl -tag -width Ds +.It Fl a +Exports all pools imported on the system. +.It Fl f +Forcefully unmount all datasets, and allow export of pools with active shared +spares. +.Pp +This command will forcefully export the pool even if it has a shared spare that +is currently being used. +This may lead to potential data corruption. +.El +. +.Sh EXAMPLES +.\" These are, respectively, examples 8 from zpool.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Exporting a ZFS Storage Pool +The following command exports the devices in pool +.Ar tank +so that they can be relocated or later imported: +.Dl # Nm zpool Cm export Ar tank +. 
+.Sh SEE ALSO +.Xr zpool-import 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-get.8 b/sys/contrib/openzfs/man/man8/zpool-get.8 new file mode 100644 index 000000000000..bfe1bae7619f --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-get.8 @@ -0,0 +1,205 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd October 12, 2024 +.Dt ZPOOL-GET 8 +.Os +. +.Sh NAME +.Nm zpool-get +.Nd retrieve properties of ZFS storage pools +.Sh SYNOPSIS +.Nm zpool +.Cm get +.Op Fl Hp +.Op Fl j Op Ar --json-int, --json-pool-key-guid +.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns … +.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns … +.Oo Ar pool Oc Ns … +. 
+.Nm zpool +.Cm get +.Op Fl Hp +.Op Fl j Op Ar --json-int +.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns … +.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns … +.Ar pool +.Oo Sy all-vdevs Ns | Ns +.Ar vdev Oc Ns … +. +.Nm zpool +.Cm set +.Ar property Ns = Ns Ar value +.Ar pool +. +.Nm zpool +.Cm set +.Ar property Ns = Ns Ar value +.Ar pool +.Ar vdev +. +.Sh DESCRIPTION +.Bl -tag -width Ds +.It Xo +.Nm zpool +.Cm get +.Op Fl Hp +.Op Fl j Op Ar --json-int, --json-pool-key-guid +.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns … +.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns … +.Oo Ar pool Oc Ns … +.Xc +Retrieves the given list of properties +.Po +or all properties if +.Sy all +is used +.Pc +for the specified storage pool(s). +These properties are displayed with the following fields: +.Bl -tag -compact -offset Ds -width "property" +.It Sy name +Name of storage pool. +.It Sy property +Property name. +.It Sy value +Property value. +.It Sy source +Property source, either +.Sy default No or Sy local . +.El +.Pp +See the +.Xr zpoolprops 7 +manual page for more information on the available pool properties. +.Bl -tag -compact -offset Ds -width "-o field" +.It Fl j , -json Op Ar --json-int, --json-pool-key-guid +Display the list of properties in JSON format. +Specify +.Sy --json-int +to display the numbers in integer format instead of strings in JSON output. +Specify +.Sy --json-pool-key-guid +to set pool GUID as key for pool objects instead of pool name. +.It Fl H +Scripted mode. +Do not display headers, and separate fields by a single tab instead of arbitrary +space. +.It Fl o Ar field +A comma-separated list of columns to display, defaults to +.Sy name , Ns Sy property , Ns Sy value , Ns Sy source . +.It Fl p +Display numbers in parsable (exact) values. 
+.El
+.It Xo
+.Nm zpool
+.Cm get
+.Op Fl j Op Ar --json-int
+.Op Fl Hp
+.Op Fl o Ar field Ns Oo , Ns Ar field Oc Ns …
+.Sy all Ns | Ns Ar property Ns Oo , Ns Ar property Oc Ns …
+.Ar pool
+.Oo Sy all-vdevs Ns | Ns
+.Ar vdev Oc Ns …
+.Xc
+Retrieves the given list of properties
+.Po
+or all properties if
+.Sy all
+is used
+.Pc
+for the specified vdevs
+.Po
+or all vdevs if
+.Sy all-vdevs
+is used
+.Pc
+in the specified pool.
+These properties are displayed with the following fields:
+.Bl -tag -compact -offset Ds -width "property"
+.It Sy name
+Name of vdev.
+.It Sy property
+Property name.
+.It Sy value
+Property value.
+.It Sy source
+Property source, either
+.Sy default No or Sy local .
+.El
+.Pp
+See the
+.Xr vdevprops 7
+manual page for more information on the available vdev properties.
+.Bl -tag -compact -offset Ds -width "-o field"
+.It Fl j , -json Op Ar --json-int
+Display the list of properties in JSON format.
+Specify
+.Sy --json-int
+to display the numbers in integer format instead of strings in JSON output.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a single tab instead of arbitrary
+space.
+.It Fl o Ar field
+A comma-separated list of columns to display, defaults to
+.Sy name , Ns Sy property , Ns Sy value , Ns Sy source .
+.It Fl p
+Display numbers in parsable (exact) values.
+.El
+.It Xo
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.Xc
+Sets the given property on the specified pool.
+See the
+.Xr zpoolprops 7
+manual page for more information on what properties can be set and acceptable
+values.
+.It Xo
+.Nm zpool
+.Cm set
+.Ar property Ns = Ns Ar value
+.Ar pool
+.Ar vdev
+.Xc
+Sets the given property on the specified vdev in the specified pool.
+See the
+.Xr vdevprops 7
+manual page for more information on what properties can be set and acceptable
+values.
+.El
+.
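When post-processing the `-j` output, the four fields described above can be flattened into rows. A hedged Python sketch: the embedded sample is an assumed illustration of the JSON shape, and `property_rows` is a hypothetical helper — verify field names against the actual `zpool get -j` output of your OpenZFS release before relying on them:

```python
import json

# Assumed illustration of "zpool get -j" output shape (not a schema
# guarantee); real output may carry additional keys such as a version stanza.
sample = json.loads("""
{
  "pools": {
    "tank": {
      "properties": {
        "size":   {"value": "7.50G", "source": {"type": "NONE"}},
        "ashift": {"value": "12",    "source": {"type": "LOCAL"}}
      }
    }
  }
}
""")

def property_rows(doc):
    """Flatten to (name, property, value, source) rows, mirroring the
    four display fields documented above."""
    for pool, pdata in doc["pools"].items():
        for prop, pv in sorted(pdata["properties"].items()):
            yield (pool, prop, pv["value"], pv["source"]["type"])

for row in property_rows(sample):
    print(row)
```

In practice the document would come from `zpool get -j all tank` via a pipe rather than an embedded string; the traversal is the same.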
+.Sh SEE ALSO +.Xr vdevprops 7 , +.Xr zpool-features 7 , +.Xr zpoolprops 7 , +.Xr zpool-list 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-history.8 b/sys/contrib/openzfs/man/man8/zpool-history.8 new file mode 100644 index 000000000000..f02168951ff2 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-history.8 @@ -0,0 +1,59 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-HISTORY 8 +.Os +. +.Sh NAME +.Nm zpool-history +.Nd inspect command history of ZFS storage pools +.Sh SYNOPSIS +.Nm zpool +.Cm history +.Op Fl il +.Oo Ar pool Oc Ns … +. +.Sh DESCRIPTION +Displays the command history of the specified pool(s) or all pools if no pool is +specified. 
+.Bl -tag -width Ds
+.It Fl i
+Displays internally logged ZFS events in addition to user-initiated events.
+.It Fl l
+Displays log records in long format, which, in addition to the standard format,
+includes the user name, the hostname, and the zone in which the operation was
+performed.
+.El
+.
+.Sh SEE ALSO
+.Xr zpool-checkpoint 8 ,
+.Xr zpool-events 8 ,
+.Xr zpool-status 8 ,
+.Xr zpool-wait 8
diff --git a/sys/contrib/openzfs/man/man8/zpool-import.8 b/sys/contrib/openzfs/man/man8/zpool-import.8
new file mode 100644
index 000000000000..c6d5f222b6b2
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool-import.8
@@ -0,0 +1,436 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd July 11, 2022
+.Dt ZPOOL-IMPORT 8
+.Os
+.
+.Sh NAME
+.Nm zpool-import
+.Nd import ZFS storage pools or list available pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm import
+.Op Fl D
+.Oo Fl d Ar dir Ns | Ns Ar device Oc Ns …
+.Nm zpool
+.Cm import
+.Fl a
+.Op Fl DflmN
+.Op Fl F Op Fl nTX
+.Op Fl -rewind-to-checkpoint
+.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
+.Op Fl o Ar mntopts
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Op Fl s
+.Nm zpool
+.Cm import
+.Op Fl Dflmt
+.Op Fl F Op Fl nTX
+.Op Fl -rewind-to-checkpoint
+.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
+.Op Fl o Ar mntopts
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Op Fl s
+.Ar pool Ns | Ns Ar id
+.Op Ar newpool
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm import
+.Op Fl D
+.Oo Fl d Ar dir Ns | Ns Ar device Oc Ns …
+.Xc
+Lists pools available to import.
+If the
+.Fl d
+or
+.Fl c
+options are not specified, this command searches for devices using libblkid
+on Linux and geom on
+.Fx .
+The
+.Fl d
+option can be specified multiple times, and all directories are searched.
+If the device appears to be part of an exported pool, this command displays a
+summary of the pool with the name of the pool, a numeric identifier, as well as
+the vdev layout and current health of the device for each device or file.
+Destroyed pools, pools that were previously destroyed with the
+.Nm zpool Cm destroy
+command, are not listed unless the
+.Fl D
+option is specified.
+.Pp
+The numeric identifier is unique, and can be used instead of the pool name when
+multiple exported pools of the same name are available.
+.Bl -tag -width Ds
+.It Fl c Ar cachefile
+Reads configuration from the given
+.Ar cachefile
+that was created with the
+.Sy cachefile
+pool property.
+This
+.Ar cachefile
+is used instead of searching for devices.
+.It Fl d Ar dir Ns | Ns Ar device
+Uses
+.Ar device
+or searches for devices or files in
+.Ar dir .
+The
+.Fl d
+option can be specified multiple times.
+.It Fl D +Lists destroyed pools only. +.El +.It Xo +.Nm zpool +.Cm import +.Fl a +.Op Fl DflmN +.Op Fl F Op Fl nTX +.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device +.Op Fl o Ar mntopts +.Oo Fl o Ar property Ns = Ns Ar value Oc Ns … +.Op Fl R Ar root +.Op Fl s +.Xc +Imports all pools found in the search directories. +Identical to the previous command, except that all pools with a sufficient +number of devices available are imported. +Destroyed pools, pools that were previously destroyed with the +.Nm zpool Cm destroy +command, will not be imported unless the +.Fl D +option is specified. +.Bl -tag -width Ds +.It Fl a +Searches for and imports all pools found. +.It Fl c Ar cachefile +Reads configuration from the given +.Ar cachefile +that was created with the +.Sy cachefile +pool property. +This +.Ar cachefile +is used instead of searching for devices. +.It Fl d Ar dir Ns | Ns Ar device +Uses +.Ar device +or searches for devices or files in +.Ar dir . +The +.Fl d +option can be specified multiple times. +This option is incompatible with the +.Fl c +option. +.It Fl D +Imports destroyed pools only. +The +.Fl f +option is also required. +.It Fl f +Forces import, even if the pool appears to be potentially active. +.It Fl F +Recovery mode for a non-importable pool. +Attempt to return the pool to an importable state by discarding the last few +transactions. +Not all damaged pools can be recovered by using this option. +If successful, the data from the discarded transactions is irretrievably lost. +This option is ignored if the pool is importable or already imported. +.It Fl l +Indicates that this command will request encryption keys for all encrypted +datasets it attempts to mount as it is bringing the pool online. +Note that if any datasets have a +.Sy keylocation +of +.Sy prompt +this command will block waiting for the keys to be entered. +Without this flag +encrypted datasets will be left unavailable until the keys are loaded. 
+.It Fl m
+Allows a pool to import when there is a missing log device.
+Recent transactions can be lost because the log device will be discarded.
+.It Fl n
+Used with the
+.Fl F
+recovery option.
+Determines whether a non-importable pool can be made importable again, but does
+not actually perform the pool recovery.
+For more details about pool recovery mode, see the
+.Fl F
+option, above.
+.It Fl N
+Import the pool without mounting any file systems.
+.It Fl o Ar mntopts
+Comma-separated list of mount options to use when mounting datasets within the
+pool.
+See
+.Xr zfs 8
+for a description of dataset properties and mount options.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property on the imported pool.
+See the
+.Xr zpoolprops 7
+manual page for more information on the available pool properties.
+.It Fl R Ar root
+Sets the
+.Sy cachefile
+property to
+.Sy none
+and the
+.Sy altroot
+property to
+.Ar root .
+.It Fl -rewind-to-checkpoint
+Rewinds pool to the checkpointed state.
+Once the pool is imported with this flag there is no way to undo the rewind.
+All changes and data that were written after the checkpoint are lost!
+The only exception is when the
+.Sy readonly
+mounting option is enabled.
+In this case, the checkpointed state of the pool is opened and an
+administrator can see what the pool would look like if they were
+to fully rewind.
+.It Fl s
+Scan using the default search path; the libblkid cache will not be
+consulted.
+A custom search path may be specified by setting the
+.Sy ZPOOL_IMPORT_PATH
+environment variable.
+.It Fl X
+Used with the
+.Fl F
+recovery option.
+Determines whether extreme measures to find a valid txg should take place.
+This allows the pool to
+be rolled back to a txg which is no longer guaranteed to be consistent.
+Pools imported at an inconsistent txg may contain uncorrectable checksum errors.
+For more details about pool recovery mode, see the
+.Fl F
+option, above.
+.Em WARNING :
+This option can be extremely hazardous to the
+health of your pool and should only be used as a last resort.
+.It Fl T
+Specify the txg to use for rollback.
+Implies
+.Fl FX .
+For more details
+about pool recovery mode, see the
+.Fl X
+option, above.
+.Em WARNING :
+This option can be extremely hazardous to the
+health of your pool and should only be used as a last resort.
+.El
+.It Xo
+.Nm zpool
+.Cm import
+.Op Fl Dflmt
+.Op Fl F Op Fl nTX
+.Op Fl c Ar cachefile Ns | Ns Fl d Ar dir Ns | Ns Ar device
+.Op Fl o Ar mntopts
+.Oo Fl o Ar property Ns = Ns Ar value Oc Ns …
+.Op Fl R Ar root
+.Op Fl s
+.Ar pool Ns | Ns Ar id
+.Op Ar newpool
+.Xc
+Imports a specific pool.
+A pool can be identified by its name or the numeric identifier.
+If
+.Ar newpool
+is specified, the pool is imported using the name
+.Ar newpool .
+Otherwise, it is imported with the same name as its exported name.
+.Pp
+If a device is removed from a system without running
+.Nm zpool Cm export
+first, the device appears as potentially active.
+It cannot be determined if this was a failed export, or whether the device is
+really in use from another host.
+To import a pool in this state, the
+.Fl f
+option is required.
+.Bl -tag -width Ds
+.It Fl c Ar cachefile
+Reads configuration from the given
+.Ar cachefile
+that was created with the
+.Sy cachefile
+pool property.
+This
+.Ar cachefile
+is used instead of searching for devices.
+.It Fl d Ar dir Ns | Ns Ar device
+Uses
+.Ar device
+or searches for devices or files in
+.Ar dir .
+The
+.Fl d
+option can be specified multiple times.
+This option is incompatible with the
+.Fl c
+option.
+.It Fl D
+Imports a destroyed pool.
+The
+.Fl f
+option is also required.
+.It Fl f
+Forces import, even if the pool appears to be potentially active.
+.It Fl F
+Recovery mode for a non-importable pool.
+Attempt to return the pool to an importable state by discarding the last few
+transactions.
+Not all damaged pools can be recovered by using this option.
+If successful, the data from the discarded transactions is irretrievably lost.
+This option is ignored if the pool is importable or already imported.
+.It Fl l
+Indicates that this command will request encryption keys for all encrypted
+datasets it attempts to mount as it is bringing the pool online.
+Note that if any datasets have a
+.Sy keylocation
+of
+.Sy prompt
+this command will block waiting for the keys to be entered.
+Without this flag
+encrypted datasets will be left unavailable until the keys are loaded.
+.It Fl m
+Allows a pool to import when there is a missing log device.
+Recent transactions can be lost because the log device will be discarded.
+.It Fl n
+Used with the
+.Fl F
+recovery option.
+Determines whether a non-importable pool can be made importable again, but does
+not actually perform the pool recovery.
+For more details about pool recovery mode, see the
+.Fl F
+option, above.
+.It Fl o Ar mntopts
+Comma-separated list of mount options to use when mounting datasets within the
+pool.
+See
+.Xr zfs 8
+for a description of dataset properties and mount options.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property on the imported pool.
+See the
+.Xr zpoolprops 7
+manual page for more information on the available pool properties.
+.It Fl R Ar root
+Sets the
+.Sy cachefile
+property to
+.Sy none
+and the
+.Sy altroot
+property to
+.Ar root .
+.It Fl s
+Scan using the default search path; the libblkid cache will not be
+consulted.
+A custom search path may be specified by setting the
+.Sy ZPOOL_IMPORT_PATH
+environment variable.
+.It Fl X
+Used with the
+.Fl F
+recovery option.
+Determines whether extreme measures to find a valid txg should take place.
+This allows the pool to
+be rolled back to a txg which is no longer guaranteed to be consistent.
+Pools imported at an inconsistent txg may contain uncorrectable
+checksum errors.
+For more details about pool recovery mode, see the
+.Fl F
+option, above.
+.Em WARNING :
+This option can be extremely hazardous to the
+health of your pool and should only be used as a last resort.
+.It Fl T
+Specify the txg to use for rollback.
+Implies
+.Fl FX .
+For more details
+about pool recovery mode, see the
+.Fl X
+option, above.
+.Em WARNING :
+This option can be extremely hazardous to the
+health of your pool and should only be used as a last resort.
+.It Fl t
+Used with
+.Ar newpool .
+Specifies that
+.Ar newpool
+is temporary.
+Temporary pool names last until export.
+Ensures that the original pool name will be used
+in all label updates and therefore is retained upon export.
+Will also set
+.Fl o Sy cachefile Ns = Ns Sy none
+when not explicitly specified.
+.El
+.El
+.
+.Sh EXAMPLES
+.\" This is example 9 from zpool.8
+.\" Make sure to update it bidirectionally
+.Ss Example 1 : No Importing a ZFS Storage Pool
+The following command displays available pools, and then imports the pool
+.Ar tank
+for use on the system.
+The results from this command are similar to the following:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm import
+  pool: tank
+    id: 15451357997522795478
+ state: ONLINE
+action: The pool can be imported using its name or numeric identifier.
+config:
+
+        tank        ONLINE
+          mirror    ONLINE
+            sda     ONLINE
+            sdb     ONLINE
+
+.No # Nm zpool Cm import Ar tank
+.Ed
+.
+.Sh SEE ALSO
+.Xr zpool-export 8 ,
+.Xr zpool-list 8 ,
+.Xr zpool-status 8
diff --git a/sys/contrib/openzfs/man/man8/zpool-initialize.8 b/sys/contrib/openzfs/man/man8/zpool-initialize.8
new file mode 100644
index 000000000000..5299a897cb97
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool-initialize.8
@@ -0,0 +1,87 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" Copyright (c) 2025 Hewlett Packard Enterprise Development LP. +.\" +.Dd July 30, 2025 +.Dt ZPOOL-INITIALIZE 8 +.Os +. +.Sh NAME +.Nm zpool-initialize +.Nd write to unallocated regions of ZFS storage pool +.Sh SYNOPSIS +.Nm zpool +.Cm initialize +.Op Fl c Ns | Ns Fl s | Ns Fl u +.Op Fl w +.Fl a Ns | Ns Ar pool +.Oo Ar device Oc Ns … +. +.Sh DESCRIPTION +Begins initializing by writing to all unallocated regions on the specified +devices, or all eligible devices in the pool if no individual devices are +specified. +Only leaf data or log devices may be initialized. +.Bl -tag -width Ds +.It Fl a , -all +Begin, cancel, suspend initializing on +all +pools. +.It Fl c , -cancel +Cancel initializing on the specified devices, or all eligible devices if none +are specified. +If one or more target devices are invalid or are not currently being +initialized, the command will fail and no cancellation will occur on any device. 
+.It Fl s , -suspend
+Suspend initializing on the specified devices, or all eligible devices if none
+are specified.
+If one or more target devices are invalid or are not currently being
+initialized, the command will fail and no suspension will occur on any device.
+Initializing can then be resumed by running
+.Nm zpool Cm initialize
+with no flags on the relevant target devices.
+.It Fl u , -uninit
+Clears the initialization state on the specified devices, or all eligible
+devices if none are specified.
+If the devices are being actively initialized, the command will fail.
+After being cleared,
+.Nm zpool Cm initialize
+with no flags can be used to re-initialize all unallocated regions on
+the relevant target devices.
+.It Fl w , -wait
+Wait until the devices have finished initializing before returning.
+.El
+.
+.Sh SEE ALSO
+.Xr zpool-add 8 ,
+.Xr zpool-attach 8 ,
+.Xr zpool-create 8 ,
+.Xr zpool-online 8 ,
+.Xr zpool-replace 8 ,
+.Xr zpool-trim 8
diff --git a/sys/contrib/openzfs/man/man8/zpool-iostat.8 b/sys/contrib/openzfs/man/man8/zpool-iostat.8
new file mode 100644
index 000000000000..5dd9c9d55e20
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool-iostat.8
@@ -0,0 +1,307 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or https://opensource.org/licenses/CDDL-1.0.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd January 29, 2024 +.Dt ZPOOL-IOSTAT 8 +.Os +. +.Sh NAME +.Nm zpool-iostat +.Nd display logical I/O statistics for ZFS storage pools +.Sh SYNOPSIS +.Nm zpool +.Cm iostat +.Op Oo Oo Fl c Ar SCRIPT Oc Oo Fl lq Oc Oc Ns | Ns Fl rw +.Op Fl T Sy u Ns | Ns Sy d +.Op Fl ghHLnpPvy +.Oo Ar pool Ns … Ns | Ns Oo Ar pool vdev Ns … Oc Ns | Ns Ar vdev Ns … Oc +.Op Ar interval Op Ar count +. +.Sh DESCRIPTION +Displays logical I/O statistics for the given pools/vdevs. +Physical I/O statistics may be observed via +.Xr iostat 1 . +If writes are located nearby, they may be merged into a single +larger operation. +Additional I/O may be generated depending on the level of vdev redundancy. +To filter output, you may pass in a list of pools, a pool and list of vdevs +in that pool, or a list of any vdevs from any pool. +If no items are specified, statistics for every pool in the system are shown. +When given an +.Ar interval , +the statistics are printed every +.Ar interval +seconds until killed. +If +.Fl n +flag is specified the headers are displayed only once, otherwise they are +displayed periodically. +If +.Ar count +is specified, the command exits after +.Ar count +reports are printed. +The first report printed is always the statistics since boot regardless of +whether +.Ar interval +and +.Ar count +are passed. 
+However, this behavior can be suppressed with the +.Fl y +flag. +Also note that the units of +.Sy K , +.Sy M , +.Sy G Ns … +that are printed in the report are in base 1024. +To get the raw values, use the +.Fl p +flag. +.Bl -tag -width Ds +.It Fl c Op Ar SCRIPT1 Ns Oo , Ns Ar SCRIPT2 Oc Ns … +Run a script (or scripts) on each vdev and include the output as a new column +in the +.Nm zpool Cm iostat +output. +Users can run any script found in their +.Pa ~/.zpool.d +directory or from the system +.Pa /etc/zfs/zpool.d +directory. +Script names containing the slash +.Pq Sy / +character are not allowed. +The default search path can be overridden by setting the +.Sy ZPOOL_SCRIPTS_PATH +environment variable. +A privileged user can only run +.Fl c +if they have the +.Sy ZPOOL_SCRIPTS_AS_ROOT +environment variable set. +If a script requires the use of a privileged command, like +.Xr smartctl 8 , +then it's recommended you allow the user access to it in +.Pa /etc/sudoers +or add the user to the +.Pa /etc/sudoers.d/zfs +file. +.Pp +If +.Fl c +is passed without a script name, it prints a list of all scripts. +.Fl c +also sets verbose mode +.No \&( Ns Fl v Ns No \&) . +.Pp +Script output should be in the form of "name=value". +The column name is set to "name" and the value is set to "value". +Multiple lines can be used to output multiple columns. +The first line of output not in the +"name=value" format is displayed without a column title, +and no more output after that is displayed. +This can be useful for printing error messages. +Blank or NULL values are printed as a '-' to make output AWKable. +.Pp +The following environment variables are set before running each script: +.Bl -tag -compact -width "VDEV_ENC_SYSFS_PATH" +.It Sy VDEV_PATH +Full path to the vdev +.It Sy VDEV_UPATH +Underlying path to the vdev +.Pq Pa /dev/sd* . +For use with device mapper, multipath, or partitioned vdevs. +.It Sy VDEV_ENC_SYSFS_PATH +The sysfs path to the enclosure for the vdev (if any). 
+.El
+.It Fl T Sy u Ns | Ns Sy d
+Display a time stamp.
+Specify
+.Sy u
+for a printed representation of the internal representation of time.
+See
+.Xr time 1 .
+Specify
+.Sy d
+for standard date format.
+See
+.Xr date 1 .
+.It Fl g
+Display vdev GUIDs instead of the normal device names.
+These GUIDs can be used in place of device names for the zpool
+detach/offline/remove/replace commands.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a
+single tab instead of arbitrary space.
+.It Fl L
+Display real paths for vdevs resolving all symbolic links.
+This can be used to look up the current block device name regardless of the
+.Pa /dev/disk/
+path used to open it.
+.It Fl n
+Print headers only once when passed.
+.It Fl p
+Display numbers in parsable (exact) values.
+Time values are in nanoseconds.
+.It Fl P
+Display full paths for vdevs instead of only the last component of the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl r
+Print request size histograms for the leaf vdev's I/O.
+This includes histograms of individual I/O (ind) and aggregate I/O (agg).
+These stats can be useful for observing how well I/O aggregation is working.
+Note that TRIM I/O may exceed 16M, but will be counted as 16M.
+.It Fl v
+Verbose statistics.
+Reports usage statistics for individual vdevs within the pool, in addition to
+the pool-wide statistics.
+.It Fl y
+Normally the first line of output reports the statistics since boot:
+suppress it.
+.It Fl w
+Display latency histograms:
+.Bl -tag -compact -width "asyncq_read/write"
+.It Sy total_wait
+Total I/O time (queuing + disk I/O time).
+.It Sy disk_wait
+Disk I/O time (time reading/writing the disk).
+.It Sy syncq_wait
+Amount of time I/O spent in synchronous priority queues.
+Does not include disk time.
+.It Sy asyncq_wait
+Amount of time I/O spent in asynchronous priority queues.
+Does not include disk time.
+.It Sy scrub
+Amount of time I/O spent in scrub queue.
+Does not include disk time.
+.It Sy rebuild +Amount of time I/O spent in rebuild queue. +Does not include disk time. +.El +.It Fl l +Include average latency statistics: +.Bl -tag -compact -width "asyncq_read/write" +.It Sy total_wait +Average total I/O time (queuing + disk I/O time). +.It Sy disk_wait +Average disk I/O time (time reading/writing the disk). +.It Sy syncq_wait +Average amount of time I/O spent in synchronous priority queues. +Does not include disk time. +.It Sy asyncq_wait +Average amount of time I/O spent in asynchronous priority queues. +Does not include disk time. +.It Sy scrub +Average queuing time in scrub queue. +Does not include disk time. +.It Sy trim +Average queuing time in trim queue. +Does not include disk time. +.It Sy rebuild +Average queuing time in rebuild queue. +Does not include disk time. +.El +.It Fl q +Include active queue statistics. +Each priority queue has both pending +.Sy ( pend ) +and active +.Sy ( activ ) +I/O requests. +Pending requests are waiting to be issued to the disk, +and active requests have been issued to disk and are waiting for completion. +These stats are broken out by priority queue: +.Bl -tag -compact -width "asyncq_read/write" +.It Sy syncq_read/write +Current number of entries in synchronous priority +queues. +.It Sy asyncq_read/write +Current number of entries in asynchronous priority queues. +.It Sy scrubq_read +Current number of entries in scrub queue. +.It Sy trimq_write +Current number of entries in trim queue. +.It Sy rebuildq_write +Current number of entries in rebuild queue. +.El +.Pp +All queue statistics are instantaneous measurements of the number of +entries in the queues. +If you specify an interval, +the measurements will be sampled from the end of the interval. +.El +. 
+.Sh EXAMPLES +.\" These are, respectively, examples 13, 16 from zpool.8 +.\" Make sure to update them bidirectionally +.Ss Example 13 : No Adding Cache Devices to a ZFS Pool +The following command adds two disks for use as cache devices to a ZFS storage +pool: +.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd +.Pp +Once added, the cache devices gradually fill with content from main memory. +Depending on the size of your cache devices, it could take over an hour for +them to fill. +Capacity and reads can be monitored using the +.Cm iostat +subcommand as follows: +.Dl # Nm zpool Cm iostat Fl v Ar pool 5 +. +.Ss Example 16 : No Adding output columns +Additional columns can be added to the +.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c . +.Bd -literal -compact -offset Ds +.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size + NAME STATE READ WRITE CKSUM vendor model size + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + U1 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U10 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U11 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U12 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U13 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U14 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + +.No # Nm zpool Cm iostat Fl vc Pa size + capacity operations bandwidth +pool alloc free read write read write size +---------- ----- ----- ----- ----- ----- ----- ---- +rpool 14.6G 54.9G 4 55 250K 2.69M + sda1 14.6G 54.9G 4 55 250K 2.69M 70G +---------- ----- ----- ----- ----- ----- ----- ---- +.Ed +. 
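+.Ss Viewing Latency and Queue Statistics
+The following command combines the average latency
+.Pq Fl l
+and active queue
+.Pq Fl q
+columns described above, sampling the pool
+.Ar tank
+.Pq an illustrative pool name
+every 5 seconds:
+.Dl # Nm zpool Cm iostat Fl lq Ar tank 5
+.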
+.Sh SEE ALSO +.Xr iostat 1 , +.Xr smartctl 8 , +.Xr zpool-list 8 , +.Xr zpool-status 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-labelclear.8 b/sys/contrib/openzfs/man/man8/zpool-labelclear.8 new file mode 100644 index 000000000000..b807acaaede3 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-labelclear.8 @@ -0,0 +1,62 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-LABELCLEAR 8 +.Os +. +.Sh NAME +.Nm zpool-labelclear +.Nd remove ZFS label information from device +.Sh SYNOPSIS +.Nm zpool +.Cm labelclear +.Op Fl f +.Ar device +. +.Sh DESCRIPTION +Removes ZFS label information from the specified +.Ar device . +If the +.Ar device +is a cache device, it also removes the L2ARC header +(persistent L2ARC). 
+The +.Ar device +must not be part of an active pool configuration. +.Bl -tag -width Ds +.It Fl f +Treat exported or foreign devices as inactive. +.El +. +.Sh SEE ALSO +.Xr zpool-destroy 8 , +.Xr zpool-detach 8 , +.Xr zpool-remove 8 , +.Xr zpool-replace 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-list.8 b/sys/contrib/openzfs/man/man8/zpool-list.8 new file mode 100644 index 000000000000..106399941f98 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-list.8 @@ -0,0 +1,254 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd October 12, 2024 +.Dt ZPOOL-LIST 8 +.Os +. 
+.Sh NAME
+.Nm zpool-list
+.Nd list information about ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm list
+.Op Fl HgLpPv
+.Op Fl j Op Ar --json-int, --json-pool-key-guid
+.Op Fl o Ar property Ns Oo , Ns Ar property Oc Ns …
+.Op Fl T Sy u Ns | Ns Sy d
+.Oo Ar pool Oc Ns …
+.Op Ar interval Op Ar count
+.
+.Sh DESCRIPTION
+Lists the given pools along with a health status and space usage.
+If no
+.Ar pool Ns s
+are specified, all pools in the system are listed.
+When given an
+.Ar interval ,
+the information is printed every
+.Ar interval
+seconds until killed.
+If
+.Ar count
+is specified, the command exits after
+.Ar count
+reports are printed.
+.Bl -tag -width Ds
+.It Fl j , -json Op Ar --json-int, --json-pool-key-guid
+Display the list of pools in JSON format.
+Specify
+.Sy --json-int
+to display the numbers in integer format instead of strings.
+Specify
+.Sy --json-pool-key-guid
+to set pool GUID as key for pool objects instead of pool names.
+.It Fl g
+Display vdev GUIDs instead of the normal device names.
+These GUIDs can be used in place of device names for the zpool
+detach/offline/remove/replace commands.
+.It Fl H
+Scripted mode.
+Do not display headers, and separate fields by a single tab instead of arbitrary
+space.
+.It Fl o Ar property
+Comma-separated list of properties to display.
+See the
+.Xr zpoolprops 7
+manual page for a list of valid properties.
+The default list is
+.Sy name , size , allocated , free , checkpoint , expandsize , fragmentation ,
+.Sy capacity , dedupratio , health , altroot .
+.It Fl L
+Display real paths for vdevs resolving all symbolic links.
+This can be used to look up the current block device name regardless of the
+.Pa /dev/disk
+path used to open it.
+.It Fl p
+Display numbers in parsable
+.Pq exact
+values.
+.It Fl P
+Display full paths for vdevs instead of only the last component of
+the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl T Sy u Ns | Ns Sy d
+Display a time stamp.
+Specify
+.Sy u
+for a printed representation of the internal representation of time.
+See
+.Xr time 1 .
+Specify
+.Sy d
+for standard date format.
+See
+.Xr date 1 .
+.It Fl v
+Verbose statistics.
+Reports usage statistics for individual vdevs within the pool, in addition to
+the pool-wide statistics.
+.El
+.
+.Sh EXAMPLES
+.\" These are, respectively, examples 6, 15 from zpool.8
+.\" Make sure to update them bidirectionally
+.Ss Example 1 : No Listing Available ZFS Storage Pools
+The following command lists all available pools on the system.
+In this case, the pool
+.Ar zion
+is faulted due to a missing device.
+The results from this command are similar to the following:
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm list
+NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+rpool  19.9G  8.43G  11.4G         -    33%    42%  1.00x  ONLINE  -
+tank   61.5G  20.0G  41.5G         -    48%    32%  1.00x  ONLINE  -
+zion       -      -      -         -      -      -      -  FAULTED -
+.Ed
+.
+.Ss Example 2 : No Displaying expanded space on a device
+The following command displays the detailed information for the pool
+.Ar data .
+This pool is comprised of a single raidz vdev where one of its devices
+increased its capacity by 10 GiB.
+In this example, the pool will not be able to utilize this extra capacity until
+all the devices under the raidz vdev have been expanded.
+.Bd -literal -compact -offset Ds
+.No # Nm zpool Cm list Fl v Ar data
+NAME         SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
+data        23.9G  14.6G  9.30G         -    48%    61%  1.00x  ONLINE  -
+  raidz1    23.9G  14.6G  9.30G         -    48%
+    sda         -      -      -         -      -
+    sdb         -      -      -       10G      -
+    sdc         -      -      -         -      -
+.Ed
+.
+.Ss Example 3 : No Listing Available ZFS Storage Pools in JSON Format
+The following command lists all available pools on the system in JSON
+format.
+.Bd -literal -compact -offset Ds +.No # Nm zpool Cm list Fl j | Nm jq +{ + "output_version": { + "command": "zpool list", + "vers_major": 0, + "vers_minor": 1 + }, + "pools": { + "tank": { + "name": "tank", + "type": "POOL", + "state": "ONLINE", + "guid": "15220353080205405147", + "txg": "2671", + "spa_version": "5000", + "zpl_version": "5", + "properties": { + "size": { + "value": "111G", + "source": { + "type": "NONE", + "data": "-" + } + }, + "allocated": { + "value": "30.8G", + "source": { + "type": "NONE", + "data": "-" + } + }, + "free": { + "value": "80.2G", + "source": { + "type": "NONE", + "data": "-" + } + }, + "checkpoint": { + "value": "-", + "source": { + "type": "NONE", + "data": "-" + } + }, + "expandsize": { + "value": "-", + "source": { + "type": "NONE", + "data": "-" + } + }, + "fragmentation": { + "value": "0%", + "source": { + "type": "NONE", + "data": "-" + } + }, + "capacity": { + "value": "27%", + "source": { + "type": "NONE", + "data": "-" + } + }, + "dedupratio": { + "value": "1.00x", + "source": { + "type": "NONE", + "data": "-" + } + }, + "health": { + "value": "ONLINE", + "source": { + "type": "NONE", + "data": "-" + } + }, + "altroot": { + "value": "-", + "source": { + "type": "DEFAULT", + "data": "-" + } + } + } + } + } +} + +.Ed +. +.Sh SEE ALSO +.Xr zpool-import 8 , +.Xr zpool-status 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-offline.8 b/sys/contrib/openzfs/man/man8/zpool-offline.8 new file mode 100644 index 000000000000..388c7634acce --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-offline.8 @@ -0,0 +1,107 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. 
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved.
+.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved.
+.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved.
+.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved.
+.\" Copyright 2017 Nexenta Systems, Inc.
+.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved.
+.\"
+.Dd December 21, 2023
+.Dt ZPOOL-OFFLINE 8
+.Os
+.
+.Sh NAME
+.Nm zpool-offline
+.Nd take physical devices offline in ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm offline
+.Op Fl Sy -power Ns | Ns Op Fl Sy ft
+.Ar pool
+.Ar device Ns …
+.Nm zpool
+.Cm online
+.Op Fl Sy -power
+.Op Fl Sy e
+.Ar pool
+.Ar device Ns …
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm offline
+.Op Fl Sy -power Ns | Ns Op Fl Sy ft
+.Ar pool
+.Ar device Ns …
+.Xc
+Takes the specified physical device offline.
+While the
+.Ar device
+is offline, no attempt is made to read or write to the device.
+This command is not applicable to spares.
+.Bl -tag -width Ds
+.It Fl -power
+Power off the device's slot in the storage enclosure.
+This flag currently works on Linux only.
+.It Fl f
+Force fault.
+Instead of offlining the disk, put it into a faulted state.
+The fault will persist across imports unless the
+.Fl t
+flag was specified.
+.It Fl t
+Temporary.
+Upon reboot, the specified physical device reverts to its previous state.
+.El
+.It Xo
+.Nm zpool
+.Cm online
+.Op Fl -power
+.Op Fl e
+.Ar pool
+.Ar device Ns …
+.Xc
+Brings the specified physical device online.
+This command is not applicable to spares.
+.Bl -tag -width Ds
+.It Fl -power
+Power on the device's slot in the storage enclosure and wait for the device
+to show up before attempting to online it.
+Alternatively, you can set the
+.Sy ZPOOL_AUTO_POWER_ON_SLOT
+environment variable to always enable this behavior.
+This flag currently works on Linux only.
+.It Fl e
+Expand the device to use all available space.
+If the device is part of a mirror or raidz then all devices must be expanded
+before the new space will become available to the pool.
+.El
+.El
+.
+.Sh SEE ALSO
+.Xr zpool-detach 8 ,
+.Xr zpool-remove 8 ,
+.Xr zpool-reopen 8 ,
+.Xr zpool-resilver 8
diff --git a/sys/contrib/openzfs/man/man8/zpool-online.8 b/sys/contrib/openzfs/man/man8/zpool-online.8
new file mode 120000
index 000000000000..537e00e1c4b0
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool-online.8
@@ -0,0 +1 @@
+zpool-offline.8
\ No newline at end of file
diff --git a/sys/contrib/openzfs/man/man8/zpool-prefetch.8 b/sys/contrib/openzfs/man/man8/zpool-prefetch.8
new file mode 100644
index 000000000000..a36ad52e681e
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool-prefetch.8
@@ -0,0 +1,47 @@
+.\" SPDX-License-Identifier: CDDL-1.0
+.\"
+.\" CDDL HEADER START
+.\"
+.\" The contents of this file are subject to the terms of the
+.\" Common Development and Distribution License (the "License").
+.\" You may not use this file except in compliance with the License.
+.\"
+.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
+.\" or http://www.opensolaris.org/os/licensing.
+.\" See the License for the specific language governing permissions
+.\" and limitations under the License.
+.\"
+.\" When distributing Covered Code, include this CDDL HEADER in each
+.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
+.\" If applicable, add the following below this CDDL HEADER, with the
+.\" fields enclosed by brackets "[]" replaced with your own identifying
+.\" information: Portions Copyright [yyyy] [name of copyright owner]
+.\"
+.\" CDDL HEADER END
+.\"
+.\"
+.\" Copyright (c) 2023, Klara Inc.
+.\"
+.Dd February 14, 2024
+.Dt ZPOOL-PREFETCH 8
+.Os
+.
+.Sh NAME
+.Nm zpool-prefetch
+.Nd load specific types of data for the given pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm prefetch
+.Fl t Ar type
+.Ar pool
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm prefetch
+.Fl t Li ddt
+.Ar pool
+.Xc
+Prefetch data of a specific type for the given pool; specifically the DDT,
+which will improve write I/O performance when the DDT is resident in the ARC.
+.El diff --git a/sys/contrib/openzfs/man/man8/zpool-reguid.8 b/sys/contrib/openzfs/man/man8/zpool-reguid.8 new file mode 100644 index 000000000000..b98c88e320de --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-reguid.8 @@ -0,0 +1,61 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" Copyright (c) 2024, Klara Inc. +.\" Copyright (c) 2024, Mateusz Piotrowski +.\" +.Dd August 26, 2024 +.Dt ZPOOL-REGUID 8 +.Os +. +.Sh NAME +.Nm zpool-reguid +.Nd generate new unique identifier for ZFS storage pool +.Sh SYNOPSIS +.Nm zpool +.Cm reguid +.Op Fl g Ar guid +.Ar pool +. +.Sh DESCRIPTION +Generates a new unique identifier for the pool. +You must ensure that all devices in this pool are online and healthy before +performing this action. +. 
+.Bl -tag -width Ds +.It Fl g Ar guid +Set the pool GUID to the provided value. +The GUID can be any 64-bit value accepted by +.Xr strtoull 3 +in base 10. +.Nm +will return an error if the provided GUID is already in use. +.El +.Sh SEE ALSO +.Xr zpool-export 8 , +.Xr zpool-import 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-remove.8 b/sys/contrib/openzfs/man/man8/zpool-remove.8 new file mode 100644 index 000000000000..4d5fc431d332 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-remove.8 @@ -0,0 +1,190 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd November 19, 2024 +.Dt ZPOOL-REMOVE 8 +.Os +. +.Sh NAME +.Nm zpool-remove +.Nd remove devices from ZFS storage pool +. 
+.Sh SYNOPSIS
+.Nm zpool
+.Cm remove
+.Op Fl npw
+.Ar pool Ar device Ns …
+.Nm zpool
+.Cm remove
+.Fl s
+.Ar pool
+.
+.Sh DESCRIPTION
+.Bl -tag -width Ds
+.It Xo
+.Nm zpool
+.Cm remove
+.Op Fl npw
+.Ar pool Ar device Ns …
+.Xc
+Removes the specified device from the pool.
+This command supports removing hot spare, cache, log, and both mirrored and
+non-redundant primary top-level vdevs, including dedup and special vdevs.
+.Pp
+Top-level vdevs can only be removed if the primary pool storage does not contain
+a top-level raidz vdev, all top-level vdevs have the same sector size, and the
+keys for all encrypted datasets are loaded.
+.Pp
+Removing a top-level vdev reduces the total amount of space in the storage pool.
+The specified device will be evacuated by copying all allocated space from it to
+the other devices in the pool.
+In this case, the
+.Nm zpool Cm remove
+command initiates the removal and returns, while the evacuation continues in
+the background.
+The removal progress can be monitored with
+.Nm zpool Cm status .
+If an I/O error is encountered during the removal process it will be canceled.
+The
+.Sy device_removal
+feature flag must be enabled to remove a top-level vdev, see
+.Xr zpool-features 7 .
+.Pp
+A mirrored top-level device (log or data) can be removed by specifying the
+top-level mirror itself.
+Non-log devices or data devices that are part of a mirrored configuration can
+be removed using the
+.Nm zpool Cm detach
+command.
+.Bl -tag -width Ds
+.It Fl n
+Do not actually perform the removal
+.Pq Qq No-op .
+Instead, print the estimated amount of memory that will be used by the
+mapping table after the removal completes.
+This is nonzero only for top-level vdevs.
+.It Fl p
+Used in conjunction with the
+.Fl n
+flag, displays numbers as parsable (exact) values.
+.It Fl w
+Waits until the removal has completed before returning.
+.El +.It Xo +.Nm zpool +.Cm remove +.Fl s +.Ar pool +.Xc +Stops and cancels an in-progress removal of a top-level vdev. +.El +. +.Sh EXAMPLES +.\" These are, respectively, examples 15 from zpool.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Removing a Mirrored top-level (Log or Data) Device +The following commands remove the mirrored log device +.Sy mirror-2 +and mirrored top-level data device +.Sy mirror-1 . +.Pp +Given this configuration: +.Bd -literal -compact -offset Ds + pool: tank + state: ONLINE + scrub: none requested +config: + + NAME STATE READ WRITE CKSUM + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + sda ONLINE 0 0 0 + sdb ONLINE 0 0 0 + mirror-1 ONLINE 0 0 0 + sdc ONLINE 0 0 0 + sdd ONLINE 0 0 0 + logs + mirror-2 ONLINE 0 0 0 + sde ONLINE 0 0 0 + sdf ONLINE 0 0 0 +.Ed +.Pp +The command to remove the mirrored log +.Ar mirror-2 No is : +.Dl # Nm zpool Cm remove Ar tank mirror-2 +.Pp +At this point, the log device no longer exists +(both sides of the mirror have been removed): +.Bd -literal -compact -offset Ds + pool: tank + state: ONLINE + scan: none requested +config: + + NAME STATE READ WRITE CKSUM + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + sda ONLINE 0 0 0 + sdb ONLINE 0 0 0 + mirror-1 ONLINE 0 0 0 + sdc ONLINE 0 0 0 + sdd ONLINE 0 0 0 +.Ed +.Pp +The command to remove the mirrored data +.Ar mirror-1 No is : +.Dl # Nm zpool Cm remove Ar tank mirror-1 +.Pp +After +.Ar mirror-1 No has been evacuated, the pool remains redundant, but +the total amount of space is reduced: +.Bd -literal -compact -offset Ds + pool: tank + state: ONLINE + scan: none requested +config: + + NAME STATE READ WRITE CKSUM + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + sda ONLINE 0 0 0 + sdb ONLINE 0 0 0 +.Ed +. 
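+.Ss Example 2 : No Estimating the Memory Cost of a Removal
+Before removing a top-level vdev, the
+.Fl n
+and
+.Fl p
+flags can be combined to print the exact amount of memory that will be used
+by the mapping table after the removal completes, without actually removing
+anything; the pool and vdev names are illustrative:
+.Dl # Nm zpool Cm remove Fl np Ar tank mirror-1
+.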
+.Sh SEE ALSO +.Xr zpool-add 8 , +.Xr zpool-detach 8 , +.Xr zpool-labelclear 8 , +.Xr zpool-offline 8 , +.Xr zpool-replace 8 , +.Xr zpool-split 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-reopen.8 b/sys/contrib/openzfs/man/man8/zpool-reopen.8 new file mode 100644 index 000000000000..c4e10f0a546e --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-reopen.8 @@ -0,0 +1,53 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-REOPEN 8 +.Os +. +.Sh NAME +.Nm zpool-reopen +.Nd reopen vdevs associated with ZFS storage pools +.Sh SYNOPSIS +.Nm zpool +.Cm reopen +.Op Fl n +.Oo Ar pool Oc Ns … +. +.Sh DESCRIPTION +Reopen all vdevs associated with the specified pools, +or all pools if none specified. +. 
+.Sh OPTIONS +.Bl -tag -width "-n" +.It Fl n +Do not restart an in-progress scrub operation. +This is not recommended and can +result in partially resilvered devices unless a second scrub is performed. +.El diff --git a/sys/contrib/openzfs/man/man8/zpool-replace.8 b/sys/contrib/openzfs/man/man8/zpool-replace.8 new file mode 100644 index 000000000000..651af13b19b8 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-replace.8 @@ -0,0 +1,100 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-REPLACE 8 +.Os +. 
+.Sh NAME
+.Nm zpool-replace
+.Nd replace one device with another in a ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm replace
+.Op Fl fsw
+.Oo Fl o Ar property Ns = Ns Ar value Oc
+.Ar pool Ar device Op Ar new-device
+.
+.Sh DESCRIPTION
+Replaces
+.Ar device
+with
+.Ar new-device .
+This is equivalent to attaching
+.Ar new-device ,
+waiting for it to resilver, and then detaching
+.Ar device .
+Any in-progress scrub will be canceled.
+.Pp
+The size of
+.Ar new-device
+must be greater than or equal to the minimum size of all the devices in a mirror
+or raidz configuration.
+.Pp
+.Ar new-device
+is required if the pool is not redundant.
+If
+.Ar new-device
+is not specified, it defaults to
+.Ar device .
+This form of replacement is useful after an existing disk has failed and has
+been physically replaced.
+In this case, the new disk may have the same
+.Pa /dev
+path as the old device, even though it is actually a different disk.
+ZFS recognizes this.
+.Bl -tag -width Ds
+.It Fl f
+Forces use of
+.Ar new-device ,
+even if it appears to be in use.
+Not all devices can be overridden in this manner.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the given pool properties.
+See the
+.Xr zpoolprops 7
+manual page for a list of valid properties that can be set.
+The only property supported at the moment is
+.Sy ashift .
+.It Fl s
+The
+.Ar new-device
+is reconstructed sequentially to restore redundancy as quickly as possible.
+Checksums are not verified during sequential reconstruction, so a scrub is
+started when the resilver completes.
+Sequential reconstruction is not supported for raidz configurations.
+.It Fl w
+Waits until the replacement has completed before returning.
+.El
+.
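+.Sh EXAMPLES
+.Ss Example 1 : No Replacing a Failed Device
+The following command replaces the failed device
+.Pa sdb
+in the pool
+.Ar tank
+with the new device
+.Pa sdc ,
+waiting until the replacement has completed before returning
+(the pool and device names are illustrative):
+.Dl # Nm zpool Cm replace Fl w Ar tank Pa sdb sdc
+.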
+.Sh SEE ALSO +.Xr zpool-detach 8 , +.Xr zpool-initialize 8 , +.Xr zpool-online 8 , +.Xr zpool-resilver 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-resilver.8 b/sys/contrib/openzfs/man/man8/zpool-resilver.8 new file mode 100644 index 000000000000..59c4be5db209 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-resilver.8 @@ -0,0 +1,58 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-RESILVER 8 +.Os +. +.Sh NAME +.Nm zpool-resilver +.Nd resilver devices in ZFS storage pools +.Sh SYNOPSIS +.Nm zpool +.Cm resilver +.Ar pool Ns … +. +.Sh DESCRIPTION +Starts a resilver of the specified pools. +If an existing resilver is already running it will be restarted from the +beginning. 
+Any drives that were scheduled for a deferred +resilver will be added to the new one. +This requires the +.Sy resilver_defer +pool feature. +. +.Sh SEE ALSO +.Xr zpool-iostat 8 , +.Xr zpool-online 8 , +.Xr zpool-reopen 8 , +.Xr zpool-replace 8 , +.Xr zpool-scrub 8 , +.Xr zpool-status 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-scrub.8 b/sys/contrib/openzfs/man/man8/zpool-scrub.8 new file mode 100644 index 000000000000..cf7ead5788bf --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-scrub.8 @@ -0,0 +1,210 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018, 2021 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" Copyright (c) 2025 Hewlett Packard Enterprise Development LP. +.\" +.Dd August 6, 2025 +.Dt ZPOOL-SCRUB 8 +.Os +. 
+.Sh NAME
+.Nm zpool-scrub
+.Nd begin or resume scrub of ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm scrub
+.Op Ns Fl e | Ns Fl p | Fl s Ns | Fl C Ns
+.Op Fl w
+.Op Fl S Ar date
+.Op Fl E Ar date
+.Fl a Ns | Ns Ar pool Ns …
+.
+.Sh DESCRIPTION
+Begins a scrub or resumes a paused scrub.
+The scrub examines all data in the specified pools to verify that it checksums
+correctly.
+For replicated
+.Pq mirror, raidz, or draid
+devices, ZFS automatically repairs any damage discovered during the scrub.
+The
+.Nm zpool Cm status
+command reports the progress of the scrub and summarizes the results of the
+scrub upon completion.
+.Pp
+Scrubbing and resilvering are very similar operations.
+The difference is that resilvering only examines data that ZFS knows to be out
+of date
+.Po
+for example, when attaching a new device to a mirror or replacing an existing
+device
+.Pc ,
+whereas scrubbing examines all data to discover silent errors due to hardware
+faults or disk failure.
+.Pp
+When scrubbing a pool with encrypted filesystems, the keys do not need to be
+loaded.
+However, if the keys are not loaded and an unrepairable checksum error is
+detected, the file name cannot be included in the
+.Nm zpool Cm status Fl v
+verbose error report.
+.Pp
+Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows
+one at a time.
+.Pp
+A scrub is split into two parts: metadata scanning and block scrubbing.
+The metadata scanning sorts blocks into large sequential ranges which can then
+be read much more efficiently from disk when issuing the scrub I/O.
+.Pp
+If a scrub is paused, the
+.Nm zpool Cm scrub
+command resumes it.
+If a resilver is in progress, ZFS does not allow a scrub to be started until the
+resilver completes.
+.Pp
+Note that, due to changes in pool data on a live system, it is possible for
+scrubs to progress slightly beyond 100% completion.
+During this period, no completion time estimate will be provided.
+.
+.Sh OPTIONS
+.Bl -tag -width "-s"
+.It Fl a , -all
+Begin, pause, or stop a scrub on
+all
+pools.
+Initiating scrubs on multiple pools can put considerable load and memory
+pressure on the system, so this operation should be performed with caution.
+.It Fl s
+Stop scrubbing.
+.It Fl p
+Pause scrubbing.
+Scrub pause state and progress are periodically synced to disk.
+If the system is restarted or the pool is exported during a paused scrub,
+even after import, the scrub will remain paused until it is resumed.
+Once resumed, the scrub will pick up from the place where it was last
+checkpointed to disk.
+To resume a paused scrub, issue
+.Nm zpool Cm scrub
+or
+.Nm zpool Cm scrub
+.Fl e
+again.
+.It Fl w
+Wait until scrub has completed before returning.
+.It Fl e
+Only scrub files with known data errors as reported by
+.Nm zpool Cm status Fl v .
+The pool must have been scrubbed at least once with the
+.Sy head_errlog
+feature enabled to use this option.
+Error scrubbing cannot be run simultaneously with regular scrubbing or
+resilvering, nor can it be run when a regular scrub is paused.
+.It Fl C
+Continue the scrub from the last saved txg (see the
+.Sy last_scrubbed_txg
+pool property).
+.It Fl S Ar date , Fl E Ar date
+Restrict the scrub to blocks created between the given dates.
+.Bl -bullet -compact -offset indent
+.It
+.Fl S
+Defines a start date.
+If not specified, scrubbing begins from the start of the pool's
+existence.
+.It
+.Fl E
+Defines an end date.
+If not specified, scrubbing continues up to the most recent data.
+.El
+The provided date should be in the format:
+.Dq YYYY-MM-DD HH:MM .
+Where:
+.Bl -bullet -compact -offset indent
+.It
+.Dq YYYY
+is the year.
+.It
+.Dq MM
+is the numeric representation of the month.
+.It
+.Dq DD
+is the day of the month.
+.It
+.Dq HH
+is the hour.
+.It
+.Dq MM
+is the minute.
+.El
+The hour and minute parameters can be omitted.
+The time should be provided in the machine's local time zone.
+Specifying dates prior to enabling this feature will result in scrubbing
+starting from the date the pool was created.
+If the system time was manually moved backward, the date range may become
+inaccurate.
+.El
+.Sh EXAMPLES
+.Ss Example 1
+Status of a pool with an ongoing scrub:
+.Pp
+.Bd -literal -compact
+.No # Nm zpool Cm status
+ ...
+ scan: scrub in progress since Sun Jul 25 16:07:49 2021
+ 403M / 405M scanned at 100M/s, 68.4M / 405M issued at 10.0M/s
+ 0B repaired, 16.91% done, 00:00:04 to go
+ ...
+.Ed
+.Pp
+Here, metadata referencing 403M of file data has been scanned at 100M/s,
+and 68.4M of that file data has been scrubbed sequentially at 10.0M/s.
+.Sh PERIODIC SCRUB
+On machines using systemd, scrub timers can be enabled on a per-pool basis.
+.Nm weekly
+and
+.Nm monthly
+timer units are provided.
+.Bl -tag -width Ds
+.It Xo
+.Xc
+.Nm systemctl
+.Cm enable
+.Cm zfs-scrub-\fIweekly\fB@\fIrpool\fB.timer
+.Cm --now
+.It Xo
+.Xc
+.Nm systemctl
+.Cm enable
+.Cm zfs-scrub-\fImonthly\fB@\fIotherpool\fB.timer
+.Cm --now
+.El
+.
+.Sh SEE ALSO
+.Xr systemd.timer 5 ,
+.Xr zpool-iostat 8 ,
+.Xr zpool-resilver 8 ,
+.Xr zpool-status 8
diff --git a/sys/contrib/openzfs/man/man8/zpool-set.8 b/sys/contrib/openzfs/man/man8/zpool-set.8
new file mode 120000
index 000000000000..2b8b8cfb7e1c
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zpool-set.8
@@ -0,0 +1 @@
+zpool-get.8
\ No newline at end of file diff --git a/sys/contrib/openzfs/man/man8/zpool-split.8 b/sys/contrib/openzfs/man/man8/zpool-split.8 new file mode 100644 index 000000000000..ee4c6384cf23 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-split.8 @@ -0,0 +1,118 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-SPLIT 8 +.Os +. +.Sh NAME +.Nm zpool-split +.Nd split devices off ZFS storage pool, creating new pool +.Sh SYNOPSIS +.Nm zpool +.Cm split +.Op Fl gLlnP +.Oo Fl o Ar property Ns = Ns Ar value Oc Ns … +.Op Fl R Ar root +.Ar pool newpool +.Oo Ar device Oc Ns … +. +.Sh DESCRIPTION +Splits devices off +.Ar pool +creating +.Ar newpool . 
+All vdevs in
+.Ar pool
+must be mirrors and the pool must not be in the process of resilvering.
+At the time of the split,
+.Ar newpool
+will be a replica of
+.Ar pool .
+By default, the
+last device in each mirror is split from
+.Ar pool
+to create
+.Ar newpool .
+.Pp
+The optional device specification causes the specified device(s) to be
+included in the new
+.Ar pool
+and, should any devices remain unspecified,
+the last device in each mirror is used, as it would be by default.
+.Bl -tag -width Ds
+.It Fl g
+Display vdev GUIDs instead of the normal device names.
+These GUIDs can be used in place of device names for the zpool
+detach/offline/remove/replace commands.
+.It Fl L
+Display real paths for vdevs resolving all symbolic links.
+This can be used to look up the current block device name regardless of the
+.Pa /dev/disk/
+path used to open it.
+.It Fl l
+Indicates that this command will request encryption keys for all encrypted
+datasets it attempts to mount as it is bringing the new pool online.
+Note that if any datasets have
+.Sy keylocation Ns = Ns Sy prompt ,
+this command will block waiting for the keys to be entered.
+Without this flag, encrypted datasets will be left unavailable until the keys
+are loaded.
+.It Fl n
+Do a dry-run
+.Pq Qq No-op
+split: do not actually perform it.
+Print out the expected configuration of
+.Ar newpool .
+.It Fl P
+Display full paths for vdevs instead of only the last component of
+the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl o Ar property Ns = Ns Ar value
+Sets the specified property for
+.Ar newpool .
+See the
+.Xr zpoolprops 7
+manual page for more information on the available pool properties.
+.It Fl R Ar root
+Set
+.Sy altroot
+for
+.Ar newpool
+to
+.Ar root
+and automatically import it.
+.El
+.
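+.Sh EXAMPLES
+.Ss Example 1 : No Previewing a Split
+The following command prints the configuration that
+.Ar newtank
+would have if the mirrored pool
+.Ar tank
+were split, without actually performing the operation
+(the pool names are illustrative):
+.Dl # Nm zpool Cm split Fl n Ar tank newtank
+.Pp
+If the proposed configuration is acceptable, the same command without
+.Fl n
+performs the split.
+.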
+.Sh SEE ALSO +.Xr zpool-import 8 , +.Xr zpool-list 8 , +.Xr zpool-remove 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-status.8 b/sys/contrib/openzfs/man/man8/zpool-status.8 new file mode 100644 index 000000000000..108a1067b384 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-status.8 @@ -0,0 +1,372 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd May 20, 2025 +.Dt ZPOOL-STATUS 8 +.Os +. 
+.Sh NAME
+.Nm zpool-status
+.Nd show detailed health status for ZFS storage pools
+.Sh SYNOPSIS
+.Nm zpool
+.Cm status
+.Op Fl DdegiLPpstvx
+.Op Fl c Ar script1 Ns Oo , Ns Ar script2 Ns ,… Oc
+.Oo Fl j|--json
+.Oo Ns Fl -json-flat-vdevs Oc
+.Oo Ns Fl -json-int Oc
+.Oo Ns Fl -json-pool-key-guid Oc
+.Oc
+.Op Fl T Ar d|u
+.Op Fl -power
+.Op Ar pool
+.Op Ar interval Op Ar count
+.
+.Sh DESCRIPTION
+Displays the detailed health status for the given pools.
+If no
+.Ar pool
+is specified, then the status of each pool in the system is displayed.
+For more information on pool and device health, see the
+.Sx Device Failure and Recovery
+section of
+.Xr zpoolconcepts 7 .
+.Pp
+If a scrub or resilver is in progress, this command reports the percentage done
+and the estimated time to completion.
+Both of these are only approximate, because the amount of data in the pool and
+the other workloads on the system can change.
+.Bl -tag -width Ds
+.It Fl c Ar script1 Ns Oo , Ns Ar script2 Ns ,… Oc
+Run a script (or scripts) on each vdev and include the output as a new column
+in the
+.Nm zpool Cm status
+output.
+See the
+.Fl c
+option of
+.Nm zpool Cm iostat
+for complete details.
+.It Fl D
+Display a histogram of deduplication statistics, showing the allocated
+.Pq physically present on disk
+and referenced
+.Pq logically referenced in the pool
+block counts and sizes by reference count.
+If repeated (-DD), also shows statistics on how much of the DDT is resident
+in the ARC.
+.It Fl d
+Display the number of Direct I/O read/write checksum verify errors that have
+occurred on a top-level VDEV.
+See
+.Sx zfs_vdev_direct_write_verify
+in
+.Xr zfs 4
+for details about the conditions that can cause Direct I/O write checksum
+verify failures to occur.
+Direct I/O read checksum verify errors can also occur if the contents of the
+buffer are being manipulated after the I/O has been issued and is in flight.
+In the case of Direct I/O read checksum verify errors, the I/O will be reissued
+through the ARC.
+.It Fl e
+Only show unhealthy vdevs (not-ONLINE or with errors).
+.It Fl g
+Display vdev GUIDs instead of the normal device names.
+These GUIDs can be used in place of device names for the zpool
+detach/offline/remove/replace commands.
+.It Fl i
+Display vdev initialization status.
+.It Fl j , -json Oo Ns Fl -json-flat-vdevs Oc Oo Ns Fl -json-int Oc \
+Oo Ns Fl -json-pool-key-guid Oc
+Display the status for ZFS pools in JSON format.
+Specify
+.Sy --json-flat-vdevs
+to display vdevs in a flat hierarchy instead of nested vdev objects.
+Specify
+.Sy --json-int
+to display numbers in integer format instead of strings.
+Specify
+.Sy --json-pool-key-guid
+to set the pool GUID as the key for pool objects instead of pool names.
+.It Fl L
+Display real paths for vdevs resolving all symbolic links.
+This can be used to look up the current block device name regardless of the
+.Pa /dev/disk/
+path used to open it.
+.It Fl P
+Display full paths for vdevs instead of only the last component of
+the path.
+This can be used in conjunction with the
+.Fl L
+flag.
+.It Fl p
+Display numbers in parsable (exact) values.
+.It Fl -power
+Display vdev enclosure slot power status (on or off).
+.It Fl s
+Display the number of leaf vdev slow I/O operations.
+This is the number of I/O operations that didn't complete in
+.Sy zio_slow_io_ms
+milliseconds
+.Pq Sy 30000 No by default .
+This does not necessarily mean the I/O operations failed to complete, just that
+they took an unreasonably long time.
+This may indicate a problem with the underlying storage.
+.It Fl T Sy d Ns | Ns Sy u
+Display a time stamp.
+Specify
+.Sy d
+for standard date format.
+See
+.Xr date 1 .
+Specify
+.Sy u
+for a printed representation of the internal representation of time.
+See
+.Xr time 1 .
+.It Fl t
+Display vdev TRIM status.
+.It Fl v +Displays verbose data error information, printing out a complete list of all +data errors since the last complete pool scrub. +If the head_errlog feature is enabled and files containing errors have been +removed then the respective filenames will not be reported in subsequent runs +of this command. +.It Fl x +Only display status for pools that are exhibiting errors or are otherwise +unavailable. +Warnings about pools not using the latest on-disk format will not be included. +.El +. +.Sh EXAMPLES +.\" These are, respectively, examples 16 from zpool.8 +.\" Make sure to update them bidirectionally +.Ss Example 1 : No Adding output columns +Additional columns can be added to the +.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c . +.Bd -literal -compact -offset Ds +.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size + NAME STATE READ WRITE CKSUM vendor model size + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + U1 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U10 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U11 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U12 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U13 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U14 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + +.No # Nm zpool Cm iostat Fl vc Pa size + capacity operations bandwidth +pool alloc free read write read write size +---------- ----- ----- ----- ----- ----- ----- ---- +rpool 14.6G 54.9G 4 55 250K 2.69M + sda1 14.6G 54.9G 4 55 250K 2.69M 70G +---------- ----- ----- ----- ----- ----- ----- ---- +.Ed +. +.Ss Example 2 : No Display the status output in JSON format +.Nm zpool Cm status No can output in JSON format if +.Fl j +is specified. +.Fl c +can be used to run a script on each VDEV. 
+.Bd -literal -compact -offset Ds +.No # Nm zpool Cm status Fl j Fl c Pa vendor , Ns Pa model , Ns Pa size | Nm jq +{ + "output_version": { + "command": "zpool status", + "vers_major": 0, + "vers_minor": 1 + }, + "pools": { + "tank": { + "name": "tank", + "state": "ONLINE", + "guid": "3920273586464696295", + "txg": "16597", + "spa_version": "5000", + "zpl_version": "5", + "status": "OK", + "vdevs": { + "tank": { + "name": "tank", + "alloc_space": "62.6G", + "total_space": "15.0T", + "def_space": "11.3T", + "read_errors": "0", + "write_errors": "0", + "checksum_errors": "0", + "vdevs": { + "raidz1-0": { + "name": "raidz1-0", + "vdev_type": "raidz", + "guid": "763132626387621737", + "state": "HEALTHY", + "alloc_space": "62.5G", + "total_space": "10.9T", + "def_space": "7.26T", + "rep_dev_size": "10.9T", + "read_errors": "0", + "write_errors": "0", + "checksum_errors": "0", + "vdevs": { + "ca1eb824-c371-491d-ac13-37637e35c683": { + "name": "ca1eb824-c371-491d-ac13-37637e35c683", + "vdev_type": "disk", + "guid": "12841765308123764671", + "path": "/dev/disk/by-partuuid/ca1eb824-c371-491d-ac13-37637e35c683", + "state": "HEALTHY", + "rep_dev_size": "3.64T", + "phys_space": "3.64T", + "read_errors": "0", + "write_errors": "0", + "checksum_errors": "0", + "vendor": "ATA", + "model": "WDC WD40EFZX-68AWUN0", + "size": "3.6T" + }, + "97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7": { + "name": "97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7", + "vdev_type": "disk", + "guid": "1527839927278881561", + "path": "/dev/disk/by-partuuid/97cd98fb-8fb8-4ac4-bc84-bd8950a7ace7", + "state": "HEALTHY", + "rep_dev_size": "3.64T", + "phys_space": "3.64T", + "read_errors": "0", + "write_errors": "0", + "checksum_errors": "0", + "vendor": "ATA", + "model": "WDC WD40EFZX-68AWUN0", + "size": "3.6T" + }, + "e9ddba5f-f948-4734-a472-cb8aa5f0ff65": { + "name": "e9ddba5f-f948-4734-a472-cb8aa5f0ff65", + "vdev_type": "disk", + "guid": "6982750226085199860", + "path": 
"/dev/disk/by-partuuid/e9ddba5f-f948-4734-a472-cb8aa5f0ff65", + "state": "HEALTHY", + "rep_dev_size": "3.64T", + "phys_space": "3.64T", + "read_errors": "0", + "write_errors": "0", + "checksum_errors": "0", + "vendor": "ATA", + "model": "WDC WD40EFZX-68AWUN0", + "size": "3.6T" + } + } + } + } + } + }, + "dedup": { + "mirror-2": { + "name": "mirror-2", + "vdev_type": "mirror", + "guid": "2227766268377771003", + "state": "HEALTHY", + "alloc_space": "89.1M", + "total_space": "3.62T", + "def_space": "3.62T", + "rep_dev_size": "3.62T", + "read_errors": "0", + "write_errors": "0", + "checksum_errors": "0", + "vdevs": { + "db017360-d8e9-4163-961b-144ca75293a3": { + "name": "db017360-d8e9-4163-961b-144ca75293a3", + "vdev_type": "disk", + "guid": "17880913061695450307", + "path": "/dev/disk/by-partuuid/db017360-d8e9-4163-961b-144ca75293a3", + "state": "HEALTHY", + "rep_dev_size": "3.63T", + "phys_space": "3.64T", + "read_errors": "0", + "write_errors": "0", + "checksum_errors": "0", + "vendor": "ATA", + "model": "WDC WD40EFZX-68AWUN0", + "size": "3.6T" + }, + "952c3baf-b08a-4a8c-b7fa-33a07af5fe6f": { + "name": "952c3baf-b08a-4a8c-b7fa-33a07af5fe6f", + "vdev_type": "disk", + "guid": "10276374011610020557", + "path": "/dev/disk/by-partuuid/952c3baf-b08a-4a8c-b7fa-33a07af5fe6f", + "state": "HEALTHY", + "rep_dev_size": "3.63T", + "phys_space": "3.64T", + "read_errors": "0", + "write_errors": "0", + "checksum_errors": "0", + "vendor": "ATA", + "model": "WDC WD40EFZX-68AWUN0", + "size": "3.6T" + } + } + } + }, + "special": { + "25d418f8-92bd-4327-b59f-7ef5d5f50d81": { + "name": "25d418f8-92bd-4327-b59f-7ef5d5f50d81", + "vdev_type": "disk", + "guid": "3935742873387713123", + "path": "/dev/disk/by-partuuid/25d418f8-92bd-4327-b59f-7ef5d5f50d81", + "state": "HEALTHY", + "alloc_space": "37.4M", + "total_space": "444G", + "def_space": "444G", + "rep_dev_size": "444G", + "phys_space": "447G", + "read_errors": "0", + "write_errors": "0", + "checksum_errors": "0", + "vendor": "ATA", + 
"model": "Micron_5300_MTFDDAK480TDS", + "size": "447.1G" + } + }, + "error_count": "0" + } + } +} +.Ed +. +.Sh SEE ALSO +.Xr zpool-events 8 , +.Xr zpool-history 8 , +.Xr zpool-iostat 8 , +.Xr zpool-list 8 , +.Xr zpool-resilver 8 , +.Xr zpool-scrub 8 , +.Xr zpool-wait 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-sync.8 b/sys/contrib/openzfs/man/man8/zpool-sync.8 new file mode 100644 index 000000000000..d1dc05d0c202 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-sync.8 @@ -0,0 +1,54 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd July 11, 2022 +.Dt ZPOOL-SYNC 8 +.Os +. +.Sh NAME +.Nm zpool-sync +.Nd flush data to primary storage of ZFS storage pools +.Sh SYNOPSIS +.Nm zpool +.Cm sync +.Oo Ar pool Oc Ns … +. 
+.Sh DESCRIPTION +This command forces all in-core dirty data to be written to the primary +pool storage and not the ZIL. +It will also update administrative information including quota reporting. +Without arguments, +.Nm zpool Cm sync +will sync all pools on the system. +Otherwise, it will sync only the specified pools. +. +.Sh SEE ALSO +.Xr zpoolconcepts 7 , +.Xr zpool-export 8 , +.Xr zpool-iostat 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-trim.8 b/sys/contrib/openzfs/man/man8/zpool-trim.8 new file mode 100644 index 000000000000..c4e849019789 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-trim.8 @@ -0,0 +1,118 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" Copyright (c) 2025 Hewlett Packard Enterprise Development LP. 
+.\"
+.Dd July 30, 2025
+.Dt ZPOOL-TRIM 8
+.Os
+.
+.Sh NAME
+.Nm zpool-trim
+.Nd initiate TRIM of free space in a ZFS storage pool
+.Sh SYNOPSIS
+.Nm zpool
+.Cm trim
+.Op Fl dw
+.Op Fl r Ar rate
+.Op Fl c Ns | Ns Fl s
+.Fl a Ns | Ns Ar pool
+.Oo Ar device Ns Oc Ns …
+.
+.Sh DESCRIPTION
+Initiates an immediate on-demand TRIM operation for all of the free space in
+a pool.
+This operation informs the underlying storage devices of all blocks
+in the pool which are no longer allocated and allows thinly provisioned
+devices to reclaim the space.
+.Pp
+A manual on-demand TRIM operation can be initiated irrespective of the
+.Sy autotrim
+pool property setting.
+See the documentation for the
+.Sy autotrim
+property in
+.Xr zpoolprops 7
+for the types of vdev devices which can be trimmed.
+.Bl -tag -width Ds
+.It Fl a , -all
+Perform TRIM operation on
+all
+pools.
+.It Fl d , -secure
+Causes a secure TRIM to be initiated.
+When performing a secure TRIM, the
+device guarantees that data stored on the trimmed blocks has been erased.
+This requires support from the device and is not supported by all SSDs.
+.It Fl r , -rate Ar rate
+Controls the rate at which the TRIM operation progresses.
+Without this
+option, TRIM is executed as quickly as possible.
+The rate, expressed in bytes
+per second, is applied on a per-vdev basis and may be set differently for
+each leaf vdev.
+.It Fl c , -cancel
+Cancel trimming on the specified devices, or all eligible devices if none
+are specified.
+If one or more target devices are invalid or are not currently being
+trimmed, the command will fail and no cancellation will occur on any device.
+.It Fl s , -suspend
+Suspend trimming on the specified devices, or all eligible devices if none
+are specified.
+If one or more target devices are invalid or are not currently being
+trimmed, the command will fail and no suspension will occur on any device.
+Trimming can then be resumed by running
+.Nm zpool Cm trim
+with no flags on the relevant target devices.
+.It Fl w , -wait +Wait until the devices are done being trimmed before returning. +.El +.Sh PERIODIC TRIM +On machines using systemd, trim timers can be enabled on a per-pool basis. +.Nm weekly +and +.Nm monthly +timer units are provided. +.Bl -tag -width Ds +.It Xo +.Xc +.Nm systemctl +.Cm enable +.Cm zfs-trim-\fIweekly\fB@\fIrpool\fB.timer +.Cm --now +.It Xo +.Xc +.Nm systemctl +.Cm enable +.Cm zfs-trim-\fImonthly\fB@\fIotherpool\fB.timer +.Cm --now +.El +. +.Sh SEE ALSO +.Xr systemd.timer 5 , +.Xr zpoolprops 7 , +.Xr zpool-initialize 8 , +.Xr zpool-wait 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-upgrade.8 b/sys/contrib/openzfs/man/man8/zpool-upgrade.8 new file mode 100644 index 000000000000..cf69060da5ce --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-upgrade.8 @@ -0,0 +1,122 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. 
+.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" Copyright (c) 2021, Colm Buckley <colm@tuatha.org> +.\" +.Dd July 11, 2022 +.Dt ZPOOL-UPGRADE 8 +.Os +. +.Sh NAME +.Nm zpool-upgrade +.Nd manage version and feature flags of ZFS storage pools +.Sh SYNOPSIS +.Nm zpool +.Cm upgrade +.Nm zpool +.Cm upgrade +.Fl v +.Nm zpool +.Cm upgrade +.Op Fl V Ar version +.Fl a Ns | Ns Ar pool Ns … +. +.Sh DESCRIPTION +.Bl -tag -width Ds +.It Xo +.Nm zpool +.Cm upgrade +.Xc +Displays pools which do not have all supported features enabled and pools +formatted using a legacy ZFS version number. +These pools can continue to be used, but some features may not be available. +Use +.Nm zpool Cm upgrade Fl a +to enable all features on all pools (subject to the +.Fl o Sy compatibility +property). +.It Xo +.Nm zpool +.Cm upgrade +.Fl v +.Xc +Displays legacy ZFS versions supported by this version of ZFS. +See +.Xr zpool-features 7 +for a description of the feature flags supported by this version of ZFS. +.It Xo +.Nm zpool +.Cm upgrade +.Op Fl V Ar version +.Fl a Ns | Ns Ar pool Ns … +.Xc +Enables all supported features on the given pool. +.Pp +If the pool has specified compatibility feature sets using the +.Fl o Sy compatibility +property, only the features present in all requested compatibility sets will be +enabled. +If this property is set to +.Ar legacy +then no upgrade will take place. +.Pp +Once this is done, the pool will no longer be accessible on systems that do not +support feature flags. +See +.Xr zpool-features 7 +for details on compatibility with systems that support feature flags, but do not +support all features enabled on the pool. +.Bl -tag -width Ds +.It Fl a +Enables all supported features (from specified compatibility sets, if any) on +all +pools. +.It Fl V Ar version +Upgrade to the specified legacy version. +If specified, no features will be enabled on the pool.
+This option can only be used to increase the version number up to the last +supported legacy version number. +.El +.El +. +.Sh EXAMPLES +.\" This is example 11 from zpool.8 +.\" Make sure to update it bidirectionally +.Ss Example 1 : No Upgrading All ZFS Storage Pools to the Current Version +The following command upgrades all ZFS Storage pools to the current version of +the software: +.Bd -literal -compact -offset Ds +.No # Nm zpool Cm upgrade Fl a +This system is currently running ZFS version 2. +.Ed +. +.Sh SEE ALSO +.Xr zpool-features 7 , +.Xr zpoolconcepts 7 , +.Xr zpoolprops 7 , +.Xr zpool-history 8 diff --git a/sys/contrib/openzfs/man/man8/zpool-wait.8 b/sys/contrib/openzfs/man/man8/zpool-wait.8 new file mode 100644 index 000000000000..28a51d29a913 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool-wait.8 @@ -0,0 +1,119 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2021 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc.
+.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd January 29, 2024 +.Dt ZPOOL-WAIT 8 +.Os +. +.Sh NAME +.Nm zpool-wait +.Nd wait for activity to stop in a ZFS storage pool +.Sh SYNOPSIS +.Nm zpool +.Cm wait +.Op Fl Hp +.Op Fl T Sy u Ns | Ns Sy d +.Op Fl t Ar activity Ns Oo , Ns Ar activity Ns Oc Ns … +.Ar pool +.Op Ar interval +. +.Sh DESCRIPTION +Waits until all background activity of the given types has ceased in the given +pool. +The activity could cease because it has completed, or because it has been +paused or canceled by a user, or because the pool has been exported or +destroyed. +If no activities are specified, the command waits until background activity of +every type listed below has ceased. +If there is no activity of the given types in progress, the command returns +immediately. +.Pp +These are the possible values for +.Ar activity , +along with what each one waits for: +.Bl -tag -compact -offset Ds -width "raidz_expand" +.It Sy discard +Checkpoint to be discarded +.It Sy free +.Sy freeing +property to become +.Sy 0 +.It Sy initialize +All initializations to cease +.It Sy replace +All device replacements to cease +.It Sy remove +Device removal to cease +.It Sy resilver +Resilver to cease +.It Sy scrub +Scrub to cease +.It Sy trim +Manual trim to cease +.It Sy raidz_expand +Attaching to a RAID-Z vdev to complete +.El +.Pp +If an +.Ar interval +is provided, the amount of work remaining, in bytes, for each activity is +printed every +.Ar interval +seconds. +.Bl -tag -width Ds +.It Fl H +Scripted mode. +Do not display headers, and separate fields by a single tab instead of arbitrary +space. +.It Fl p +Display numbers in parsable (exact) values. +.It Fl T Sy u Ns | Ns Sy d +Display a time stamp. +Specify +.Sy u +for a printed representation of the internal representation of time. +See +.Xr time 1 . 
+Specify +.Sy d +for standard date format. +See +.Xr date 1 . +.El +. +.Sh SEE ALSO +.Xr zpool-checkpoint 8 , +.Xr zpool-initialize 8 , +.Xr zpool-remove 8 , +.Xr zpool-replace 8 , +.Xr zpool-resilver 8 , +.Xr zpool-scrub 8 , +.Xr zpool-status 8 , +.Xr zpool-trim 8 diff --git a/sys/contrib/openzfs/man/man8/zpool.8 b/sys/contrib/openzfs/man/man8/zpool.8 new file mode 100644 index 000000000000..3bfef780b298 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool.8 @@ -0,0 +1,657 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2007, Sun Microsystems, Inc. All Rights Reserved. +.\" Copyright (c) 2012, 2018 by Delphix. All rights reserved. +.\" Copyright (c) 2012 Cyril Plisko. All Rights Reserved. +.\" Copyright (c) 2017 Datto Inc. +.\" Copyright (c) 2018 George Melikov. All Rights Reserved. +.\" Copyright 2017 Nexenta Systems, Inc. +.\" Copyright (c) 2017 Open-E, Inc. All Rights Reserved. +.\" +.Dd November 19, 2024 +.Dt ZPOOL 8 +.Os +. +.Sh NAME +.Nm zpool +.Nd configure ZFS storage pools +.Sh SYNOPSIS +.Nm +.Fl ?V +.Nm +.Cm version +.Op Fl j +.Nm +.Cm subcommand +.Op Ar arguments +. 
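As a quick orientation, the synopsis above corresponds to invocations like the following sketch; the pool name is illustrative and the exact output format varies by platform and ZFS version:

```shell
# Print the version of the zpool utility and the ZFS kernel module.
zpool -V

# The same information as JSON, via the -j option documented here.
zpool version -j

# The general form: a subcommand followed by its arguments.
zpool status tank
```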
+.Sh DESCRIPTION +The +.Nm +command configures ZFS storage pools. +A storage pool is a collection of devices that provides physical storage and +data replication for ZFS datasets. +All datasets within a storage pool share the same space. +See +.Xr zfs 8 +for information on managing datasets. +.Pp +For an overview of creating and managing ZFS storage pools see the +.Xr zpoolconcepts 7 +manual page. +. +.Sh SUBCOMMANDS +All subcommands that modify state are logged persistently to the pool in their +original form. +.Pp +The +.Nm +command provides subcommands to create and destroy storage pools, add capacity +to storage pools, and provide information about the storage pools. +The following subcommands are supported: +.Bl -tag -width Ds +.It Xo +.Nm +.Fl ?\& +.Xc +Displays a help message. +.It Xo +.Nm +.Fl V , -version +.Xc +.It Xo +.Nm +.Cm version +.Op Fl j +.Xc +Displays the software version of the +.Nm +userland utility and the ZFS kernel module. +Use the +.Fl j +option to output in JSON format. +.El +. +.Ss Creation +.Bl -tag -width Ds +.It Xr zpool-create 8 +Creates a new storage pool containing the virtual devices specified on the +command line. +.It Xr zpool-initialize 8 +Begins initializing by writing to all unallocated regions on the specified +devices, or all eligible devices in the pool if no individual devices are +specified. +.El +. +.Ss Destruction +.Bl -tag -width Ds +.It Xr zpool-destroy 8 +Destroys the given pool, freeing up any devices for other use. +.It Xr zpool-labelclear 8 +Removes ZFS label information from the specified +.Ar device . +.El +. +.Ss Virtual Devices +.Bl -tag -width Ds +.It Xo +.Xr zpool-attach 8 Ns / Ns Xr zpool-detach 8 +.Xc +Converts a non-redundant disk into a mirror, or increases +the redundancy level of an existing mirror +.Cm ( attach Ns ), or performs the inverse operation ( +.Cm detach Ns ).
+.It Xo +.Xr zpool-add 8 Ns / Ns Xr zpool-remove 8 +.Xc +Adds the specified virtual devices to the given pool, +or removes the specified device from the pool. +.It Xr zpool-replace 8 +Replaces an existing device (which may be faulted) with a new one. +.It Xr zpool-split 8 +Creates a new pool by splitting all mirrors in an existing pool (which decreases +its redundancy). +.El +. +.Ss Properties +Available pool properties are listed in the +.Xr zpoolprops 7 +manual page. +.Bl -tag -width Ds +.It Xr zpool-list 8 +Lists the given pools along with a health status and space usage. +.It Xo +.Xr zpool-get 8 Ns / Ns Xr zpool-set 8 +.Xc +Retrieves the given list of properties +.Po +or all properties if +.Sy all +is used +.Pc +for the specified storage pool(s). +.El +. +.Ss Monitoring +.Bl -tag -width Ds +.It Xr zpool-status 8 +Displays the detailed health status for the given pools. +.It Xr zpool-iostat 8 +Displays logical I/O statistics for the given pools/vdevs. +Physical I/O operations may be observed via +.Xr iostat 1 . +.It Xr zpool-events 8 +Lists all recent events generated by the ZFS kernel modules. +These events are consumed by the +.Xr zed 8 +daemon and used to automate administrative tasks such as replacing a failed +device with a hot spare. +That manual page also describes the subclasses and event payloads +that can be generated. +.It Xr zpool-history 8 +Displays the command history of the specified pool(s) or all pools if no pool is +specified. +.El +. +.Ss Maintenance +.Bl -tag -width Ds +.It Xr zpool-prefetch 8 +Prefetches specific types of pool data. +.It Xr zpool-scrub 8 +Begins a scrub or resumes a paused scrub. +.It Xr zpool-checkpoint 8 +Checkpoints the current state of +.Ar pool , +which can be later restored by +.Nm zpool Cm import Fl -rewind-to-checkpoint . +.It Xr zpool-trim 8 +Initiates an immediate on-demand TRIM operation for all of the free space in a +pool.
+This operation informs the underlying storage devices of all blocks +in the pool which are no longer allocated and allows thinly provisioned +devices to reclaim the space. +.It Xr zpool-sync 8 +This command forces all in-core dirty data to be written to the primary +pool storage and not the ZIL. +It will also update administrative information including quota reporting. +Without arguments, +.Nm zpool Cm sync +will sync all pools on the system. +Otherwise, it will sync only the specified pool(s). +.It Xr zpool-upgrade 8 +Manage the on-disk format version of storage pools. +.It Xr zpool-wait 8 +Waits until all background activity of the given types has ceased in the given +pool. +.El +. +.Ss Fault Resolution +.Bl -tag -width Ds +.It Xo +.Xr zpool-offline 8 Ns / Ns Xr zpool-online 8 +.Xc +Takes the specified physical device offline or brings it online. +.It Xr zpool-resilver 8 +Starts a resilver. +If an existing resilver is already running it will be restarted from the +beginning. +.It Xr zpool-reopen 8 +Reopen all the vdevs associated with the pool. +.It Xr zpool-clear 8 +Clears device errors in a pool. +.El +. +.Ss Import & Export +.Bl -tag -width Ds +.It Xr zpool-import 8 +Make disks containing ZFS storage pools available for use on the system. +.It Xr zpool-export 8 +Exports the given pools from the system. +.It Xr zpool-reguid 8 +Generates a new unique identifier for the pool. +.El +. +.Sh EXIT STATUS +The following exit values are returned: +.Bl -tag -compact -offset 4n -width "a" +.It Sy 0 +Successful completion. +.It Sy 1 +An error occurred. +.It Sy 2 +Invalid command line options were specified. +.El +. +.Sh EXAMPLES +.\" Examples 1, 2, 3, 4, 12, 13 are shared with zpool-create.8. +.\" Examples 6, 14 are shared with zpool-add.8. +.\" Examples 7, 16 are shared with zpool-list.8. +.\" Examples 8 are shared with zpool-destroy.8. +.\" Examples 9 are shared with zpool-export.8. +.\" Examples 10 are shared with zpool-import.8. 
+.\" Examples 11 are shared with zpool-upgrade.8. +.\" Examples 15 are shared with zpool-remove.8. +.\" Examples 17 are shared with zpool-status.8. +.\" Examples 14, 17 are also shared with zpool-iostat.8. +.\" Make sure to update them omnidirectionally +.Ss Example 1 : No Creating a RAID-Z Storage Pool +The following command creates a pool with a single raidz root vdev that +consists of six disks: +.Dl # Nm zpool Cm create Ar tank Sy raidz Pa sda sdb sdc sdd sde sdf +. +.Ss Example 2 : No Creating a Mirrored Storage Pool +The following command creates a pool with two mirrors, where each mirror +contains two disks: +.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy mirror Pa sdc sdd +. +.Ss Example 3 : No Creating a ZFS Storage Pool by Using Partitions +The following command creates a non-redundant pool using two disk partitions: +.Dl # Nm zpool Cm create Ar tank Pa sda1 sdb2 +. +.Ss Example 4 : No Creating a ZFS Storage Pool by Using Files +The following command creates a non-redundant pool using files. +While not recommended, a pool based on files can be useful for experimental +purposes. +.Dl # Nm zpool Cm create Ar tank Pa /path/to/file/a /path/to/file/b +. +.Ss Example 5 : No Making a non-mirrored ZFS Storage Pool mirrored +The following command converts an existing single device +.Ar sda +into a mirror by attaching a second device to it, +.Ar sdb . +.Dl # Nm zpool Cm attach Ar tank Pa sda sdb +. +.Ss Example 6 : No Adding a Mirror to a ZFS Storage Pool +The following command adds two mirrored disks to the pool +.Ar tank , +assuming the pool is already made up of two-way mirrors. +The additional space is immediately available to any datasets within the pool. +.Dl # Nm zpool Cm add Ar tank Sy mirror Pa sda sdb +. +.Ss Example 7 : No Listing Available ZFS Storage Pools +The following command lists all available pools on the system. +In this case, the pool +.Ar zion +is faulted due to a missing device. 
+The results from this command are similar to the following: +.Bd -literal -compact -offset Ds +.No # Nm zpool Cm list +NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT +rpool 19.9G 8.43G 11.4G - 33% 42% 1.00x ONLINE - +tank 61.5G 20.0G 41.5G - 48% 32% 1.00x ONLINE - +zion - - - - - - - FAULTED - +.Ed +. +.Ss Example 8 : No Destroying a ZFS Storage Pool +The following command destroys the pool +.Ar tank +and any datasets contained within: +.Dl # Nm zpool Cm destroy Fl f Ar tank +. +.Ss Example 9 : No Exporting a ZFS Storage Pool +The following command exports the devices in pool +.Ar tank +so that they can be relocated or later imported: +.Dl # Nm zpool Cm export Ar tank +. +.Ss Example 10 : No Importing a ZFS Storage Pool +The following command displays available pools, and then imports the pool +.Ar tank +for use on the system. +The results from this command are similar to the following: +.Bd -literal -compact -offset Ds +.No # Nm zpool Cm import + pool: tank + id: 15451357997522795478 + state: ONLINE +action: The pool can be imported using its name or numeric identifier. +config: + + tank ONLINE + mirror ONLINE + sda ONLINE + sdb ONLINE + +.No # Nm zpool Cm import Ar tank +.Ed +. +.Ss Example 11 : No Upgrading All ZFS Storage Pools to the Current Version +The following command upgrades all ZFS Storage pools to the current version of +the software: +.Bd -literal -compact -offset Ds +.No # Nm zpool Cm upgrade Fl a +This system is currently running ZFS version 2. +.Ed +. +.Ss Example 12 : No Managing Hot Spares +The following command creates a new pool with an available hot spare: +.Dl # Nm zpool Cm create Ar tank Sy mirror Pa sda sdb Sy spare Pa sdc +.Pp +If one of the disks were to fail, the pool would be reduced to the degraded +state. 
+The failed device can be replaced using the following command: +.Dl # Nm zpool Cm replace Ar tank Pa sda sdd +.Pp +Once the data has been resilvered, the spare is automatically removed and is +made available for use should another device fail. +The hot spare can be permanently removed from the pool using the following +command: +.Dl # Nm zpool Cm remove Ar tank Pa sdc +. +.Ss Example 13 : No Creating a ZFS Pool with Mirrored Separate Intent Logs +The following command creates a ZFS storage pool consisting of two, two-way +mirrors and mirrored log devices: +.Dl # Nm zpool Cm create Ar pool Sy mirror Pa sda sdb Sy mirror Pa sdc sdd Sy log mirror Pa sde sdf +. +.Ss Example 14 : No Adding Cache Devices to a ZFS Pool +The following command adds two disks for use as cache devices to a ZFS storage +pool: +.Dl # Nm zpool Cm add Ar pool Sy cache Pa sdc sdd +.Pp +Once added, the cache devices gradually fill with content from main memory. +Depending on the size of your cache devices, it could take over an hour for +them to fill. +Capacity and reads can be monitored using the +.Cm iostat +subcommand as follows: +.Dl # Nm zpool Cm iostat Fl v Ar pool 5 +. +.Ss Example 15 : No Removing a Mirrored top-level (Log or Data) Device +The following commands remove the mirrored log device +.Sy mirror-2 +and mirrored top-level data device +.Sy mirror-1 . 
+.Pp +Given this configuration: +.Bd -literal -compact -offset Ds + pool: tank + state: ONLINE + scrub: none requested +config: + + NAME STATE READ WRITE CKSUM + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + sda ONLINE 0 0 0 + sdb ONLINE 0 0 0 + mirror-1 ONLINE 0 0 0 + sdc ONLINE 0 0 0 + sdd ONLINE 0 0 0 + logs + mirror-2 ONLINE 0 0 0 + sde ONLINE 0 0 0 + sdf ONLINE 0 0 0 +.Ed +.Pp +The command to remove the mirrored log +.Ar mirror-2 No is : +.Dl # Nm zpool Cm remove Ar tank mirror-2 +.Pp +At this point, the log device no longer exists +(both sides of the mirror have been removed): +.Bd -literal -compact -offset Ds + pool: tank + state: ONLINE + scan: none requested +config: + + NAME STATE READ WRITE CKSUM + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + sda ONLINE 0 0 0 + sdb ONLINE 0 0 0 + mirror-1 ONLINE 0 0 0 + sdc ONLINE 0 0 0 + sdd ONLINE 0 0 0 +.Ed +.Pp +The command to remove the mirrored data +.Ar mirror-1 No is : +.Dl # Nm zpool Cm remove Ar tank mirror-1 +.Pp +After +.Ar mirror-1 No has been evacuated, the pool remains redundant, but +the total amount of space is reduced: +.Bd -literal -compact -offset Ds + pool: tank + state: ONLINE + scan: none requested +config: + + NAME STATE READ WRITE CKSUM + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + sda ONLINE 0 0 0 + sdb ONLINE 0 0 0 +.Ed +. +.Ss Example 16 : No Displaying expanded space on a device +The following command displays the detailed information for the pool +.Ar data . +This pool is comprised of a single raidz vdev where one of its devices +increased its capacity by 10 GiB. +In this example, the pool will not be able to utilize this extra capacity until +all the devices under the raidz vdev have been expanded. +.Bd -literal -compact -offset Ds +.No # Nm zpool Cm list Fl v Ar data +NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT +data 23.9G 14.6G 9.30G - 48% 61% 1.00x ONLINE - + raidz1 23.9G 14.6G 9.30G - 48% + sda - - - - - + sdb - - - 10G - + sdc - - - - - +.Ed +. 
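As a follow-up to Example 16: once every device under the raidz vdev has been physically grown, the extra capacity can be claimed per device with the expand flag of zpool-online(8), or automatically via the autoexpand pool property (see zpoolprops(7)). Pool and device names in this sketch are hypothetical:

```shell
# Ask ZFS to expand each device to its new full size (-e).
zpool online -e data sda
zpool online -e data sdb
zpool online -e data sdc

# Or let the pool pick up grown devices automatically from now on.
zpool set autoexpand=on data
```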
+.Ss Example 17 : No Adding output columns +Additional columns can be added to the +.Nm zpool Cm status No and Nm zpool Cm iostat No output with Fl c . +.Bd -literal -compact -offset Ds +.No # Nm zpool Cm status Fl c Pa vendor , Ns Pa model , Ns Pa size + NAME STATE READ WRITE CKSUM vendor model size + tank ONLINE 0 0 0 + mirror-0 ONLINE 0 0 0 + U1 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U10 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U11 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U12 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U13 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + U14 ONLINE 0 0 0 SEAGATE ST8000NM0075 7.3T + +.No # Nm zpool Cm iostat Fl vc Pa size + capacity operations bandwidth +pool alloc free read write read write size +---------- ----- ----- ----- ----- ----- ----- ---- +rpool 14.6G 54.9G 4 55 250K 2.69M + sda1 14.6G 54.9G 4 55 250K 2.69M 70G +---------- ----- ----- ----- ----- ----- ----- ---- +.Ed +. +.Sh ENVIRONMENT VARIABLES +.Bl -tag -compact -width "ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE" +.It Sy ZFS_ABORT +Cause +.Nm +to dump core on exit for the purposes of running +.Sy ::findleaks . +.It Sy ZFS_COLOR +Use ANSI color in +.Nm zpool Cm status +and +.Nm zpool Cm iostat +output. +.It Sy ZPOOL_AUTO_POWER_ON_SLOT +Automatically attempt to turn on a drive's enclosure slot power when +running the +.Nm zpool Cm online +or +.Nm zpool Cm clear +commands. +This has the same effect as passing the +.Fl -power +option to those commands. +.It Sy ZPOOL_POWER_ON_SLOT_TIMEOUT_MS +The maximum time in milliseconds to wait for a slot power sysfs value +to return the correct value after writing it. +For example, after writing "on" to the sysfs enclosure slot power_control file, +it can take some time for the enclosure to power up the slot and return +"on" if you read back the 'power_control' value. +Defaults to 30 seconds (30000ms) if not set. +.It Sy ZPOOL_IMPORT_PATH +The search path for devices or files to use with the pool.
+This is a colon-separated list of directories in which +.Nm +looks for device nodes and files. +Similar to the +.Fl d +option in +.Nm zpool import . +.It Sy ZPOOL_IMPORT_UDEV_TIMEOUT_MS +The maximum time in milliseconds that +.Nm zpool import +will wait for an expected device to be available. +.It Sy ZPOOL_STATUS_NON_NATIVE_ASHIFT_IGNORE +If set, suppress warning about non-native vdev ashift in +.Nm zpool Cm status . +The value is not used, only the presence or absence of the variable matters. +.It Sy ZPOOL_VDEV_NAME_GUID +Cause +.Nm +subcommands to output vdev GUIDs by default. +This behavior is identical to the +.Nm zpool Cm status Fl g +command line option. +.It Sy ZPOOL_VDEV_NAME_FOLLOW_LINKS +Cause +.Nm +subcommands to follow links for vdev names by default. +This behavior is identical to the +.Nm zpool Cm status Fl L +command line option. +.It Sy ZPOOL_VDEV_NAME_PATH +Cause +.Nm +subcommands to output full vdev path names by default. +This behavior is identical to the +.Nm zpool Cm status Fl P +command line option. +.It Sy ZFS_VDEV_DEVID_OPT_OUT +Older OpenZFS implementations had issues when attempting to display pool +config vdev names if a +.Sy devid +NVP value is present in the pool's config. +.Pp +For example, a pool that originated on illumos platform would have a +.Sy devid +value in the config and +.Nm zpool Cm status +would fail when listing the config. +This would also be true for future Linux-based pools. +.Pp +A pool can be stripped of any +.Sy devid +values on import or prevented from adding +them on +.Nm zpool Cm create +or +.Nm zpool Cm add +by setting +.Sy ZFS_VDEV_DEVID_OPT_OUT . +.Pp +.It Sy ZPOOL_SCRIPTS_AS_ROOT +Allow a privileged user to run +.Nm zpool Cm status Ns / Ns Cm iostat Fl c . +Normally, only unprivileged users are allowed to run +.Fl c . +.It Sy ZPOOL_SCRIPTS_PATH +The search path for scripts when running +.Nm zpool Cm status Ns / Ns Cm iostat Fl c . 
+This is a colon-separated list of directories and overrides the default +.Pa ~/.zpool.d +and +.Pa /etc/zfs/zpool.d +search paths. +.It Sy ZPOOL_SCRIPTS_ENABLED +Allow a user to run +.Nm zpool Cm status Ns / Ns Cm iostat Fl c . +If +.Sy ZPOOL_SCRIPTS_ENABLED +is not set, it is assumed that the user is allowed to run +.Nm zpool Cm status Ns / Ns Cm iostat Fl c . +.\" Shared with zfs.8 +.It Sy ZFS_MODULE_TIMEOUT +Time, in seconds, to wait for +.Pa /dev/zfs +to appear. +Defaults to +.Sy 10 , +max +.Sy 600 Pq 10 minutes . +If +.Pf < Sy 0 , +wait forever; if +.Sy 0 , +don't wait. +.El +. +.Sh INTERFACE STABILITY +.Sy Evolving +. +.Sh SEE ALSO +.Xr zfs 4 , +.Xr zpool-features 7 , +.Xr zpoolconcepts 7 , +.Xr zpoolprops 7 , +.Xr zed 8 , +.Xr zfs 8 , +.Xr zpool-add 8 , +.Xr zpool-attach 8 , +.Xr zpool-checkpoint 8 , +.Xr zpool-clear 8 , +.Xr zpool-create 8 , +.Xr zpool-ddtprune 8 , +.Xr zpool-destroy 8 , +.Xr zpool-detach 8 , +.Xr zpool-events 8 , +.Xr zpool-export 8 , +.Xr zpool-get 8 , +.Xr zpool-history 8 , +.Xr zpool-import 8 , +.Xr zpool-initialize 8 , +.Xr zpool-iostat 8 , +.Xr zpool-labelclear 8 , +.Xr zpool-list 8 , +.Xr zpool-offline 8 , +.Xr zpool-online 8 , +.Xr zpool-prefetch 8 , +.Xr zpool-reguid 8 , +.Xr zpool-remove 8 , +.Xr zpool-reopen 8 , +.Xr zpool-replace 8 , +.Xr zpool-resilver 8 , +.Xr zpool-scrub 8 , +.Xr zpool-set 8 , +.Xr zpool-split 8 , +.Xr zpool-status 8 , +.Xr zpool-sync 8 , +.Xr zpool-trim 8 , +.Xr zpool-upgrade 8 , +.Xr zpool-wait 8 diff --git a/sys/contrib/openzfs/man/man8/zpool_influxdb.8 b/sys/contrib/openzfs/man/man8/zpool_influxdb.8 new file mode 100644 index 000000000000..9ee294dbbecc --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zpool_influxdb.8 @@ -0,0 +1,99 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. 
+.\" +.\" You can obtain a copy of the license at +.\" https://opensource.org/licenses/CDDL-1.0 +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. +.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright 2020 Richard Elling +.\" +.Dd May 26, 2021 +.Dt ZPOOL_INFLUXDB 8 +.Os +. +.Sh NAME +.Nm zpool_influxdb +.Nd collect ZFS pool statistics in InfluxDB line protocol format +.Sh SYNOPSIS +.Nm +.Op Fl e Ns | Ns Fl -execd +.Op Fl n Ns | Ns Fl -no-histogram +.Op Fl s Ns | Ns Fl -sum-histogram-buckets +.Op Fl t Ns | Ns Fl -tags Ar key Ns = Ns Ar value Ns Oo , Ns Ar key Ns = Ns Ar value Oc Ns … +.Op Ar pool +. +.Sh DESCRIPTION +.Nm +produces InfluxDB-line-protocol-compatible metrics from zpools. +Like the +.Nm zpool +command, +.Nm +reads the current pool status and statistics. +Unlike the +.Nm zpool +command which is intended for humans, +.Nm +formats the output in the InfluxDB line protocol. +The expected use is as a plugin to a +metrics collector or aggregator, such as Telegraf. +.Pp +By default, +.Nm +prints pool metrics and status in the InfluxDB line protocol format. +All pools are printed, similar to the +.Nm zpool Cm status +command. +Providing a pool name restricts the output to the named pool. +. +.Sh OPTIONS +.Bl -tag -width "-e, --execd" +.It Fl e , -execd +Run in daemon mode compatible with Telegraf's +.Nm execd +plugin. +In this mode, the pools are sampled every time a +newline appears on the standard input. +.It Fl n , -no-histogram +Do not print latency and I/O size histograms. 
+This can reduce the total +amount of data, but one should consider the value brought by the insights +that latency and I/O size distributions provide. +The resulting values +are suitable for graphing with Grafana's heatmap plugin. +.It Fl s , -sum-histogram-buckets +Accumulates bucket values. +By default, the values are not accumulated and the raw data appears as shown by +.Nm zpool Cm iostat . +This works well for Grafana's heatmap plugin. +Summing the buckets produces output similar to Prometheus histograms. +.It Fl t , Fl -tags Ar key Ns = Ns Ar value Ns Oo , Ns Ar key Ns = Ns Ar value Oc Ns … +Adds specified tags to the tag set. +No sanity checking is performed. +See the InfluxDB Line Protocol format documentation for details on escaping +special characters used in tags. +.It Fl h , -help +Print a usage summary. +.El +. +.Sh SEE ALSO +.Xr zpool-iostat 8 , +.Xr zpool-status 8 , +.Lk https://github.com/influxdata/influxdb "InfluxDB" , +.Lk https://github.com/influxdata/telegraf "Telegraf" , +.Lk https://grafana.com "Grafana" , +.Lk https://prometheus.io "Prometheus" diff --git a/sys/contrib/openzfs/man/man8/zstream.8 b/sys/contrib/openzfs/man/man8/zstream.8 new file mode 100644 index 000000000000..5b3d063bc4a5 --- /dev/null +++ b/sys/contrib/openzfs/man/man8/zstream.8 @@ -0,0 +1,200 @@ +.\" SPDX-License-Identifier: CDDL-1.0 +.\" +.\" CDDL HEADER START +.\" +.\" The contents of this file are subject to the terms of the +.\" Common Development and Distribution License (the "License"). +.\" You may not use this file except in compliance with the License. +.\" +.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE +.\" or https://opensource.org/licenses/CDDL-1.0. +.\" See the License for the specific language governing permissions +.\" and limitations under the License. +.\" +.\" When distributing Covered Code, include this CDDL HEADER in each +.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. 
+.\" If applicable, add the following below this CDDL HEADER, with the +.\" fields enclosed by brackets "[]" replaced with your own identifying +.\" information: Portions Copyright [yyyy] [name of copyright owner] +.\" +.\" CDDL HEADER END +.\" +.\" Copyright (c) 2020 by Delphix. All rights reserved. +.\" +.Dd November 10, 2022 +.Dt ZSTREAM 8 +.Os +. +.Sh NAME +.Nm zstream +.Nd manipulate ZFS send streams +.Sh SYNOPSIS +.Nm +.Cm dump +.Op Fl Cvd +.Op Ar file +.Nm +.Cm decompress +.Op Fl v +.Op Ar object Ns Sy \&, Ns Ar offset Ns Op Sy \&, Ns Ar type Ns ... +.Nm +.Cm redup +.Op Fl v +.Ar file +.Nm +.Cm token +.Ar resume_token +.Nm +.Cm recompress +.Op Fl l Ar level +.Ar algorithm +. +.Sh DESCRIPTION +The +.Sy zstream +utility manipulates ZFS send streams output by the +.Sy zfs send +command. +.Bl -tag -width "" +.It Xo +.Nm +.Cm dump +.Op Fl Cvd +.Op Ar file +.Xc +Print information about the specified send stream, including headers and +record counts. +The send stream may either be in the specified +.Ar file , +or provided on standard input. +.Bl -tag -width "-D" +.It Fl C +Suppress the validation of checksums. +.It Fl v +Verbose. +Print metadata for each record. +.It Fl d +Dump data contained in each record. +Implies verbose. +.El +.Pp +The +.Nm zstreamdump +alias is provided for compatibility and is equivalent to running +.Nm +.Cm dump . +.It Xo +.Nm +.Cm token +.Ar resume_token +.Xc +Dumps zfs resume token information +.It Xo +.Nm +.Cm decompress +.Op Fl v +.Op Ar object Ns Sy \&, Ns Ar offset Ns Op Sy \&, Ns Ar type Ns ... +.Xc +Decompress selected records in a ZFS send stream provided on standard input, +when the compression type recorded in ZFS metadata may be incorrect. +Specify the object number and byte offset of each record that you wish to +decompress. +Optionally specify the compression type. +Valid compression types include +.Sy off , +.Sy gzip , +.Sy lz4 , +.Sy lzjb , +.Sy zstd , +and +.Sy zle . +The default is +.Sy lz4 . 
+Every record for that object beginning at that offset will be decompressed, if
+possible.
+It may not be possible, because the record may be corrupted in some but not
+all of the stream's snapshots.
+Specifying a compression type of
+.Sy off
+will change the stream's metadata accordingly, without attempting decompression.
+This can be useful if the record is already uncompressed but the metadata
+insists otherwise.
+The repaired stream will be written to standard output.
+.Bl -tag -width "-v"
+.It Fl v
+Verbose.
+Print summary of decompressed records.
+.El
+.It Xo
+.Nm
+.Cm redup
+.Op Fl v
+.Ar file
+.Xc
+Deduplicated send streams can be generated by using the
+.Nm zfs Cm send Fl D
+command.
+The ability to send deduplicated send streams is deprecated.
+In the future, the ability to receive a deduplicated send stream with
+.Nm zfs Cm receive
+will be removed.
+However, deduplicated send streams can still be received by utilizing
+.Nm zstream Cm redup .
+.Pp
+The
+.Nm zstream Cm redup
+command is provided a
+.Ar file
+containing a deduplicated send stream, and outputs an equivalent
+non-deduplicated send stream on standard output.
+Therefore, a deduplicated send stream can be received by running:
+.Dl # Nm zstream Cm redup Pa DEDUP_STREAM_FILE | Nm zfs Cm receive No …
+.Bl -tag -width "-D"
+.It Fl v
+Verbose.
+Print summary of converted records.
+.El
+.It Xo
+.Nm
+.Cm recompress
+.Op Fl l Ar level
+.Ar algorithm
+.Xc
+Recompresses a send stream, provided on standard input, using the provided
+algorithm and optional level, and writes the modified stream to standard output.
+All WRITE records in the send stream will be recompressed, unless they fail
+to result in size reduction compared to being left uncompressed.
+The provided algorithm can be any valid value for the
+.Sy compress
+property.
+Note that encrypted send streams cannot be recompressed.
+.Bl -tag -width "-l"
+.It Fl l Ar level
+Specifies compression level.
+Only needed for algorithms where the level is not implied as part of the name
+of the algorithm (e.g. gzip-3 does not require it, while zstd does, if a
+non-default level is desired).
+.El
+.El
+.
+.Sh EXAMPLES
+Heal a dataset that was corrupted due to OpenZFS bug #12762.
+First, determine which records are corrupt.
+That cannot be done automatically; it requires information beyond ZFS's
+metadata.
+If object
+.Sy 128
+is corrupted at offset
+.Sy 0
+and is compressed using
+.Sy lz4 ,
+then run this command:
+.Bd -literal
+.No # Nm zfs Ar send Fl c Ar … | Nm zstream decompress Ar 128,0,lz4 | \
+Nm zfs recv Ar …
+.Ed
+.Sh SEE ALSO
+.Xr zfs 8 ,
+.Xr zfs-receive 8 ,
+.Xr zfs-send 8 ,
+.Lk https://github.com/openzfs/zfs/issues/12762
diff --git a/sys/contrib/openzfs/man/man8/zstreamdump.8 b/sys/contrib/openzfs/man/man8/zstreamdump.8
new file mode 120000
index 000000000000..c6721daf11de
--- /dev/null
+++ b/sys/contrib/openzfs/man/man8/zstreamdump.8
@@ -0,0 +1 @@
+zstream.8
\ No newline at end of file
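The zpool_influxdb page added above describes running under Telegraf's execd plugin. A minimal Telegraf configuration fragment for that mode might look like the sketch below; the binary path is an assumption and varies by distribution, so adjust it to wherever your packages install zpool_influxdb.

```toml
# Sketch of a Telegraf execd input for zpool_influxdb (path is an assumption;
# some distributions use /usr/libexec/zfs/, others /usr/lib/zfs/).
[[inputs.execd]]
  command = ["/usr/libexec/zfs/zpool_influxdb", "--execd", "--no-histogram"]
  # In --execd mode the pools are re-sampled each time a newline arrives on
  # the plugin's standard input; Telegraf sends one per collection interval.
  signal = "STDIN"
  # zpool_influxdb already emits InfluxDB line protocol, so no translation
  # is needed.
  data_format = "influx"
```

With `--no-histogram` the per-pool summary metrics are still emitted, which keeps the collected data small; drop the flag if you want the latency and I/O size histograms for a Grafana heatmap.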