path: root/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/sys
Commit message | Author | Date | Files | Lines
* Merge OpenZFS support in to HEAD. (Matt Macy, 2020-08-25; 91 files, -16601/+0)
The primary benefit is maintaining a completely shared code base with the community, allowing FreeBSD to receive new features sooner and with less effort. I would advise against doing 'zpool upgrade' or creating indispensable pools using new features until this change has had a month+ to soak. Work on merging FreeBSD support into what was at the time "ZFS on Linux" began in August 2018. I first publicly proposed transitioning FreeBSD to (new) OpenZFS on December 18th, 2018. FreeBSD support in OpenZFS was finally completed in December 2019. A CFT for downstreaming OpenZFS support into FreeBSD was first issued on July 8th. All issues that were reported have been addressed; for a couple of less critical matters, pull requests are in progress with OpenZFS. iXsystems has tested and dogfooded it extensively internally. The TrueNAS 12 release is based on OpenZFS with some additional features that have not yet made it upstream. Improvements include: project quotas, encrypted datasets, allocation classes, vectorized raidz, vectorized checksums, various command line improvements, zstd compression. Thanks to those who have helped along the way: Ryan Moeller, Allan Jude, Zack Welch, and many others. Sponsored by: iXsystems, Inc. Differential Revision: https://reviews.freebsd.org/D25872 Notes: svn path=/head/; revision=364746
* zfs: add an option to the bootloader to rewind the ZFS checkpoint (Mariusz Zaborski, 2020-08-18; 1 file, -1/+1)
Checkpoints are another way of keeping the state of ZFS. During the rewind, the pool has to be exported, which makes checkpoints unusable when using ZFS as root. Add the option to rewind the ZFS checkpoint at boot time. If a checkpoint exists, a new option for rewinding it will appear in the bootloader menu. We fully support boot environments: if the rewind option is selected, the boot loader will show a list of boot environments that existed before the checkpoint. Reviewed by: tsoome, allanjude, kevans (ok with high-level overview) Differential Revision: https://reviews.freebsd.org/D24920 Notes: svn path=/head/; revision=364355
* Fix linker error in libuutil with recent LLVM (Alex Richardson, 2020-08-07; 1 file, -1/+1)
| | | | | | | | | | | | | | | | Not marking the function as static can result in a linker error: undefined reference to __assfail [--no-allow-shlib-undefined] I noticed this error after updating our CHERI LLVM to the latest upstream LLVM HEAD revision. This change effectively reverts r329984 and marks dmu_buf_init_user as static (which keeps the GCC build happy). Reviewed By: #zfs, asomers, freqlabs, mav Differential Revision: https://reviews.freebsd.org/D25663 Notes: svn path=/head/; revision=364027
* MFOpenZFS: Add support for boot environment data to be stored in the label (Toomas Soome, 2020-08-05; 2 files, -4/+26)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | We are building new bootonce mechanism (previously zfs bootnext) and it is based on this OpenZFS change. Since this patch is nicely self contained, I am commiting it as is, and we can stack our changes. Original patch description follows: Modern bootloaders leverage data stored in the root filesystem to enable some of their powerful features. GRUB specifically has a grubenv file which can store large amounts of configuration data that can be read and written at boot time and during normal operation. This allows sysadmins to configure useful features like automated failover after failed boot attempts. Unfortunately, due to the Copy-on-Write nature of ZFS, the standard behavior of these tools cannot handle writing to ZFS files safely at boot time. We need an alternative way to store data that allows the bootloader to make changes to the data. This work is very similar to work that was done on Illumos to enable similar functionality in the FreeBSD bootloader. This patch is different in that the data being stored is a raw grubenv file; this file can store arbitrary variables and values, and the scripting provided by grub is powerful enough that special structures are not required to implement advanced behavior. We repurpose the second padding area in each label to store the grubenv file, protected by an embedded checksum. We add two ioctls to get and set this data, and libzfs_core and libzfs functions to access them more easily. There are no direct command line interfaces to these functions; these will be added directly to the bootloader utilities. Reviewed-by: Pavel Zakharov <pavel.zakharov@delphix.com> Reviewed-by: Matthew Ahrens <mahrens@delphix.com> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Paul Dagnelie <pcd@delphix.com> Closes #10009 Obtained from: OpenZFS Sponsored by: Netflix, Klara Inc. Notes: svn path=/head/; revision=363911
* zfs: add support for lockless lookup (Mateusz Guzik, 2020-07-25; 1 file, -0/+2)
| | | | | | | | Tested by: pho (in a patchset, previous version) Differential Revision: https://reviews.freebsd.org/D25581 Notes: svn path=/head/; revision=363522
* rework how ZVOLs are updated in response to DSL operations (Andriy Gapon, 2020-06-11; 2 files, -5/+8)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | With this change all ZVOL updates are initiated from the SPA sync context instead of a mix of the sync and open contexts. The updates are queued to be applied by a dedicated thread in the original order. This should ensure that ZVOLs always accurately reflect the corresponding datasets. ZFS ioctl operations wait on the mentioned thread to complete its work. Thus, the illusion of the synchronous ZVOL update is preserved. At the same time, the SPA sync thread never blocks on ZVOL related operations avoiding problems like reported in bug 203864. This change is based on earlier work in the same direction: D7179 and D14669 by Anthoine Bourgeois. D7179 tried to perform ZVOL operations in the open context and that opened races between them. D14669 uses a design very similar to this change but with different implementation details. This change also heavily borrows from similar code in ZoL, but there are many differences too. See: - https://github.com/zfsonlinux/zfs/commit/a0bd735adb1b1eb81fef10b4db102ee051c4d4ff - https://github.com/zfsonlinux/zfs/issues/3681 - https://github.com/zfsonlinux/zfs/issues/2217 PR: 203864 MFC after: 5 weeks Sponsored by: CyberSecure Differential Revision: https://reviews.freebsd.org/D23478 Notes: svn path=/head/; revision=362047
* Don't block on the range lock in zfs_getpages(). (Mark Johnston, 2020-05-20; 1 file, -0/+2)
| | | | | | | | | | | | | | | | | | | | | | | | After r358443 the vnode object lock no longer synchronizes concurrent zfs_getpages() and zfs_write() (which must update vnode pages to maintain coherence). This created a potential deadlock between ZFS range locks and VM page busy locks: a fault on a mapped file will cause the fault page to be busied, after which zfs_getpages() locks a range around the file offset in order to map adjacent, resident pages; zfs_write() locks the range first, and then must busy vnode pages when synchronizing. Solve this by adding a non-blocking mode for ZFS range locks, and using it in zfs_getpages(). If zfs_getpages() fails to acquire the range lock, only the fault page will be populated. Reported by: bdrewery Reviewed by: avg Tested by: pho Sponsored by: The FreeBSD Foundation Differential Revision: https://reviews.freebsd.org/D24839 Notes: svn path=/head/; revision=361287
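To make the non-blocking mode concrete, here is a tiny user-space sketch of the acquisition pattern described above; the struct, function names, and the pthread mutex are stand-ins chosen for illustration, not the actual ZFS rangelock API.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct range_lock {
        pthread_mutex_t rl_mtx;    /* stand-in for the real range-lock state */
    };

    /* Try to take the lock without sleeping; report success or failure. */
    static bool
    range_lock_tryenter(struct range_lock *rl)
    {
        return (pthread_mutex_trylock(&rl->rl_mtx) == 0);
    }

    static void
    getpages(struct range_lock *rl)
    {
        if (range_lock_tryenter(rl)) {
            printf("lock acquired: populate the fault page and neighbors\n");
            pthread_mutex_unlock(&rl->rl_mtx);
        } else {
            /* Contended: fall back to populating only the busied fault page. */
            printf("lock busy: populate the fault page only\n");
        }
    }

    int
    main(void)
    {
        struct range_lock rl = { .rl_mtx = PTHREAD_MUTEX_INITIALIZER };

        getpages(&rl);
        return (0);
    }

The key design point is that the fault page is already busied by the caller, so even the fallback path makes forward progress; only the opportunistic mapping of neighboring pages is skipped when the range is contended.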
* MFOpenZFS: make zil max block size tunable (Alexander Motin, 2020-03-19; 2 files, -24/+12)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | We've observed that on some highly fragmented pools, most metaslab allocations are small (~2-8KB), but there are some large, 128K allocations. The large allocations are for ZIL blocks. If there is a lot of fragmentation, the large allocations can be hard to satisfy. The most common impact of this is that we need to check (and thus load) lots of metaslabs from the ZIL allocation code path, causing sync writes to wait for metaslabs to load, which can take a second or more. In the worst case, we may not be able to satisfy the allocation, in which case the ZIL will resort to txg_wait_synced() to ensure the change is on disk. To provide a workaround for this, this change adds a tunable that can reduce the size of ZIL blocks. External-issue: DLPX-61719 Reviewed-by: George Wilson <george.wilson@delphix.com> Reviewed-by: Paul Dagnelie <pcd@delphix.com> Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Matthew Ahrens <mahrens@delphix.com> Closes #8865 openzfs/zfs@b8738257c2607c73c731ce8e0fd73282b266d6ef MFC after: 2 weeks Notes: svn path=/head/; revision=359112
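The idea of the tunable amounts to a clamp on the requested ZIL block size. The sketch below uses an invented variable name (the real knob is, as far as I recall, exposed as zil_maxblocksize) and is not the actual ZIL allocation code.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical knob: upper bound on a single ZIL block allocation. */
    static uint64_t max_zil_block = 128 * 1024;    /* default 128K */

    static uint64_t
    zil_block_size(uint64_t wanted)
    {
        /* Never ask the allocator for more than the configured maximum. */
        return (wanted < max_zil_block ? wanted : max_zil_block);
    }

    int
    main(void)
    {
        max_zil_block = 64 * 1024;    /* lowered on a fragmented pool */
        printf("allocating %ju bytes\n",
            (uintmax_t)zil_block_size(200 * 1024));
        return (0);
    }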
* Remove duplicate dbufs accounting. (Alexander Motin, 2020-02-07; 1 file, -0/+7)
Since AVL already has an embedded element counter, use dn_dbufs_count only for dbufs not counted there (bonus buffers) and just add the two together. This removes two atomics per dbuf life cycle. According to the profiler it reduces the time spent by dbuf_destroy() inside the bottlenecked dbuf_evict_thread() from 13.36% to 9.20% of the core. This counter is used only on illumos, so for FreeBSD it was just a waste of time. MFC after: 2 weeks Notes: svn path=/head/; revision=357657
* Reduce number of atomic_add() calls in aggsum. (Alexander Motin, 2020-02-06; 1 file, -1/+1)
The previous code used 4 atomics to do aggsum_flush_bucket() and 2 more to re-borrow after the flush. But since asc_borrowed and asc_delta are accessed only while holding asc_lock, it makes no sense to modify as_lower_bound and as_upper_bound in multiple steps. Instead, the new code uses only 2 atomics in all cases, one per as_*_bound variable. I think even that is overkill; a simple atomic store and load could be used here, since all modifications are done under the as_lock, but there are no such primitives in the ZFS code now. While there, make the borrow code consider the previous borrow value, so that on mixed request patterns the chance of needing to borrow again is reduced when a much larger request follows a tiny one that needed a borrow. Also reduce as_numbuckets from uint64_t to u_int; it makes no sense to use such a large division operation on every aggsum_add(). Reviewed by: Brian Behlendorf, Paul Dagnelie MFC after: 2 weeks Sponsored by: iXsystems, Inc. Notes: svn path=/head/; revision=357639
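A compact sketch of the flush described above, using C11 atomics and a pthread mutex as stand-ins for the kernel primitives. Field names follow the commit text; everything else is illustrative rather than the committed aggsum code.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    struct aggsum_bucket {
        pthread_mutex_t asc_lock;
        int64_t asc_delta;        /* net change recorded in this bucket */
        uint64_t asc_borrowed;    /* slack borrowed from the global bounds */
    };

    struct aggsum {
        _Atomic int64_t as_lower_bound;
        _Atomic int64_t as_upper_bound;
    };

    static void
    aggsum_flush_bucket(struct aggsum *as, struct aggsum_bucket *asb)
    {
        pthread_mutex_lock(&asb->asc_lock);
        /* One atomic per bound: fold in the delta and return the borrow. */
        atomic_fetch_add(&as->as_lower_bound,
            asb->asc_delta + (int64_t)asb->asc_borrowed);
        atomic_fetch_add(&as->as_upper_bound,
            asb->asc_delta - (int64_t)asb->asc_borrowed);
        asb->asc_delta = 0;
        asb->asc_borrowed = 0;
        pthread_mutex_unlock(&asb->asc_lock);
    }

    int
    main(void)
    {
        /* A borrow of 30 has already widened the bounds around a true 0. */
        struct aggsum as = { -30, 30 };
        struct aggsum_bucket b = {
            .asc_lock = PTHREAD_MUTEX_INITIALIZER,
            .asc_delta = 100,
            .asc_borrowed = 30,
        };

        aggsum_flush_bucket(&as, &b);
        /* Both bounds converge on the exact value, here 100. */
        printf("bounds now [%lld, %lld]\n",
            (long long)as.as_lower_bound, (long long)as.as_upper_bound);
        return (0);
    }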
* Few microoptimizations to dbuf layer. (Alexander Motin, 2020-02-04; 1 file, -6/+7)
Move db_link into the same cache line as db_blkid and db_level. This significantly reduces avl_add() time in dbuf_create() on systems with large RAM and a huge number of dbufs per dnode. Avoid a few accesses to dbuf_caches[].size, which is highly congested under high IOPS and never stays in cache for long; use the local value we receive from zfs_refcount_add_many() anyway. Remove the cache_size_bytes_max bump from dbuf_evict_one(); I don't see a point in doing it on dbuf eviction after we have done it on insertion in dbuf_rele_and_unlock(). Reviewed by: mahrens, Brian Behlendorf MFC after: 2 weeks Sponsored by: iXsystems, Inc. Notes: svn path=/head/; revision=357502
* zfs: ZFS_WLOCK_TEARDOWN_INACTIVE_WLOCKED -> ZFS_TEARDOWN_INACTIVE_WLOCKED (Mateusz Guzik, 2020-02-01; 1 file, -1/+1)
| | | | | | | Fix up the argument used in one case as well. Notes: svn path=/head/; revision=357357
* zfs: convert z_teardown_inactive_lock to sleepable read-mostly lock (Mateusz Guzik, 2020-01-31; 1 file, -7/+8)
| | | | | | | | | | | | This eliminates a global serialisation point. It only gets write locked on unmount. Sample result doing an incremental -j 40 build: before: 173.30s user 458.97s system 2595% cpu 24.358 total after: 168.58s user 254.92s system 2211% cpu 19.147 total Notes: svn path=/head/; revision=357322
* zfs: provide macros to handle z_teardown_inactive_lock (Mateusz Guzik, 2020-01-31; 1 file, -0/+18)
| | | | Notes: svn path=/head/; revision=357321
* zfs: fix spurious lock contention during path lookup (Mateusz Guzik, 2020-01-30; 1 file, -0/+3)
| | | | | | | | | | ZFS tracks if anything denies VEXEC to allow for a quick check for the common case of path traversal. Use it. Differential Revision: https://reviews.freebsd.org/D22224 Notes: svn path=/head/; revision=357282
* Map ECKSUM and EFRAGS from ZFS onto real errnos. (Alexander Motin, 2020-01-13; 1 file, -5/+4)
| | | | | | | | | | | | | | | | | Make it less confusing when, for example, stat sets errno to 122 because a checksum failed in ZFS: Before: getfacl: /foo/bar: stat() failed: Unknown error: 122 After: getfacl: /foo/bar: stat() failed: Integrity check failed Submitted by: Ryan Moeller <ryan@ixsystems.com> Reviewed by: mckusick, mav MFC after: 2 weeks Sponsored by: iXsystems, Inc. Differential Revision: https://reviews.freebsd.org/D22973 Notes: svn path=/head/; revision=356707
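A minimal sketch of the translation, assuming FreeBSD's EINTEGRITY errno. The ECKSUM value of 122 comes from the example above; the EFRAGS value and its ENOSPC mapping are placeholders chosen for illustration and are not necessarily what the commit does.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    #define ZFS_ECKSUM    122    /* internal "checksum failed" value (from the example) */
    #define ZFS_EFRAGS    123    /* illustrative internal value */

    /* Translate internal ZFS error numbers into errnos libc can describe. */
    static int
    zfs_external_errno(int err)
    {
        switch (err) {
        case ZFS_ECKSUM:
            return (EINTEGRITY);    /* FreeBSD-specific: "Integrity check failed" */
        case ZFS_EFRAGS:
            return (ENOSPC);        /* assumed mapping, for the sketch only */
        default:
            return (err);
        }
    }

    int
    main(void)
    {
        printf("%s\n", strerror(zfs_external_errno(ZFS_ECKSUM)));
        return (0);
    }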
* Use a callout instead of timeout(9) for delayed zio's. (John Baldwin, 2019-12-13; 1 file, -0/+3)
| | | | | | | | Reviewed by: avg Differential Revision: https://reviews.freebsd.org/D22597 Notes: svn path=/head/; revision=355726
* MFV r354383: 10592 misc. metaslab and vdev related ZoL bug fixes (Andriy Gapon, 2019-11-21; 5 files, -25/+102)
| | | | | | | | | | | | | | | | | | | | | | illumos/illumos-gate@555d674d5d4b8191dc83723188349d28278b2431 https://github.com/illumos/illumos-gate/commit/555d674d5d4b8191dc83723188349d28278b2431 https://www.illumos.org/issues/10592 This is a collection of recent fixes from ZoL: 8eef997679b Error path in metaslab_load_impl() forgets to drop ms_sync_lock 928e8ad47d3 Introduce auxiliary metaslab histograms 425d3237ee8 Get rid of space_map_update() for ms_synced_length 6c926f426a2 Simplify log vdev removal code 21e7cf5da89 zdb -L should skip leak detection altogether df72b8bebe0 Rename range_tree_verify to range_tree_verify_not_present 75058f33034 Remove unused vdev_t fields Portions contributed by: Jerry Jelinek <jerry.jelinek@joyent.com> Author: Serapheim Dimitropoulos <serapheim@delphix.com> MFC after: 4 weeks Notes: svn path=/head/; revision=354948
* MFV r354382,r354385: 10601 10757 Pool allocation classes (Andriy Gapon, 2019-11-21; 8 files, -4/+50)
illumos/illumos-gate@663207adb1669640c01c5ec6949ce78fd806efae https://github.com/illumos/illumos-gate/commit/663207adb1669640c01c5ec6949ce78fd806efae 10601 Pool allocation classes https://www.illumos.org/issues/10601 illumos port of ZoL Pool allocation classes. Includes at least these two commits: 441709695 Pool allocation classes misplacing small file blocks cc99f275a Pool allocation classes 10757 Add -gLp to zpool subcommands for alt vdev names https://www.illumos.org/issues/10757 Port from ZoL of d2f3e292d Add -gLp to zpool subcommands for alt vdev names Note that a subsequent ZoL commit changed -p to -P a77f29f93 Change full path subcommand flag from -p to -P Portions contributed by: Jerry Jelinek <jerry.jelinek@joyent.com> Portions contributed by: Håkan Johansson <f96hajo@chalmers.se> Portions contributed by: Richard Yao <ryao@gentoo.org> Portions contributed by: Chunwei Chen <david.chen@nutanix.com> Portions contributed by: loli10K <ezomori.nozomu@gmail.com> Author: Don Brady <don.brady@delphix.com> 11541 allocation_classes feature must be enabled to add log device illumos/illumos-gate@c1064fd7ce62fe763a4475e9988ffea3b22137de https://github.com/illumos/illumos-gate/commit/c1064fd7ce62fe763a4475e9988ffea3b22137de https://www.illumos.org/issues/11541 After the allocation_classes feature was integrated, one can no longer add a log device to a pool unless that feature is enabled. There is an explicit check for this, but it is unnecessary in the case of log devices, so we should handle this better instead of forcing the feature to be enabled. Author: Jerry Jelinek <jerry.jelinek@joyent.com> FreeBSD notes. I faithfully added the new -g, -L, -P flags, but only -g does something: vdev GUIDs are displayed instead of device names. -L, resolve symlinks, and -P, display full disk paths, do nothing at the moment. The use of special vdevs is backward compatible for read-only access, so root pools should be bootable, but exercise caution. MFC after: 4 weeks Notes: svn path=/head/; revision=354941
* MFV r354378,r354379,r354386: 10499 Multi-modifier protection (MMP) (Andriy Gapon, 2019-11-18; 9 files, -7/+160)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 10499 Multi-modifier protection (MMP) illumos/illumos-gate@e0f1c0afa46cc84d4b1e40124032a9a87310386e https://github.com/illumos/illumos-gate/commit/e0f1c0afa46cc84d4b1e40124032a9a87310386e https://www.illumos.org/issues/10499 Port the following ZFS commits from ZoL to illumos. 379ca9cf2 Multi-modifier protection (MMP) bbffb59ef Fix multihost stale cache file import 0d398b256 Do not initiate MMP writes while pool is suspended 10701 Correct lock ASSERTs in vdev_label_read/write illumos/illumos-gate@58447f688d5e308373ab16a3b129bc0ba0fbc154 https://github.com/illumos/illumos-gate/commit/58447f688d5e308373ab16a3b129bc0ba0fbc154 https://www.illumos.org/issues/10701 Port of ZoL commit: 0091d66f4e Correct lock ASSERTs in vdev_label_read/write At a minimum, this fixes a blown assert during an MMP test run when running on a DEBUG build. 11770 additional mmp fixes illumos/illumos-gate@4348eb901228d2f8fa50bb132a34248e8662074e https://github.com/illumos/illumos-gate/commit/4348eb901228d2f8fa50bb132a34248e8662074e https://www.illumos.org/issues/11770 Port a few additional MMP fixes from ZoL that came in after our initial MMP port. 4ca457b065 ZTS: Fix mmp_interval failure ca95f70dff zpool import progress kstat (only minimal changes from above can be pulled in right now) 060f0226e6 MMP interval and fail_intervals in uberblock Note from the committer (me). I do not have any use for this feature and I have not tested it. I only did smoke testing with multihost=off. Please be aware. I merged the code only to make future merges easier. Portions contributed by: Jerry Jelinek <jerry.jelinek@joyent.com> Portions contributed by: Tim Chase <tim@chase2k.com> Portions contributed by: sanjeevbagewadi <sanjeev.bagewadi@gmail.com> Portions contributed by: John L. Hammond <john.hammond@intel.com> Portions contributed by: Giuseppe Di Natale <dinatale2@llnl.gov> Portions contributed by: Prakash Surya <surya1@llnl.gov> Portions contributed by: Brian Behlendorf <behlendorf1@llnl.gov> Author: Olaf Faaland <faaland1@llnl.gov> MFC after: 4 weeks Notes: svn path=/head/; revision=354804
* zfs: enable SPA_PROCESS on the kernel side (Andriy Gapon, 2019-11-04; 1 file, -0/+1)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The purpose of this change is to group kernelthreads specific to a particular ZFS pool under a kernel process. There can be many dozens of threads per pool. This change improves observability of those threads. This change consists of several subchanges: 1. illumos taskq_create_proc can now pass its process parameter to taskqueue. Also, use zfsproc instead of NULL for taskq_create. Caveat: zfsproc might not be initialized yet. But in that case it is still NULL, so not worse than before. 2. illumos sys/proc.h: kthread id is stored in t_did field, not t_tid. 3. zfs: enable SPA_PROCESS on the kernel side. The change is a bit hairy as newproc() is implemented privately to spa.c. I couldn't think of a better way to populate process name than to poke inside the argument for the process routine. 4. illumos thread_create: allow assigning thread to process other than zfsproc. 5. zfs: expose spa_proc to other users, assign sync and quiesce threads to it. Pool-specific threads created using (relatively new) zthr mechanism are still assigned to the zfskern process rather than to a respective zpool-xxx process. I am going to address this a bit later. Reviewed by: no one MFC after: 5 weeks Relnotes: perhaps Differential Revision: https://reviews.freebsd.org/D9720 Notes: svn path=/head/; revision=354333
* MFV r353637: 10844 Serialize ZTHR operations to eliminate races (Andriy Gapon, 2019-10-16; 2 files, -20/+4)
| | | | | | | | | | | | | | | | illumos/illumos-gate@6a316e1f6d32750bb8fcf2558dcb17b90ca580fd https://github.com/illumos/illumos-gate/commit/6a316e1f6d32750bb8fcf2558dcb17b90ca580fd https://www.illumos.org/issues/10844 ZoL 61c3391acc9 Serialize ZTHR operations to eliminate races Portions contributed by: Jerry Jelinek <jerry.jelinek@joyent.com> Author: Serapheim Dimitropoulos <serapheim@delphix.com> Obtained from: illumos, ZoL MFC after: 3 weeks Notes: svn path=/head/; revision=353638
* MFV r348596: 9689 zfs range lock code should not be zpl-specific (Andriy Gapon, 2019-10-16; 3 files, -45/+48)
| | | | | | | | | | | | | | illumos/illumos-gate@7931524763ef94dc16989451dddd206563d03bb4 FreeBSD note: some tweaking was needed to avoid a conflict with sys/rangelock.h. Author: Matthew Ahrens <mahrens@delphix.com> Obtained from: illumos MFC after: 3 weeks Notes: svn path=/head/; revision=353634
* MFV r353619: 9691 fat zap should prefetch when iterating (Andriy Gapon, 2019-10-16; 1 file, -1/+4)
| | | | | | | | | | | | | | | | | | | | | | illumos/illumos-gate@52abb70e073c2a88808c0d66fd810ba8c5080572 https://github.com/illumos/illumos-gate/commit/52abb70e073c2a88808c0d66fd810ba8c5080572 https://www.illumos.org/issues/9691 When iterating over a ZAP object, we're almost always certain to iterate over the entire object. If there are multiple leaf blocks, we can realize a performance win by issuing reads for all the leaf blocks in parallel when the iteration begins. For example, if we have 10,000 snapshots, "zfs destroy -nv pool/fs@1%9999" can take 30 minutes when the cache is cold. This change provides a >3x performance improvement, by issuing the reads for all ~64 blocks of each ZAP object in parallel. Author: Matthew Ahrens <mahrens@delphix.com> Obtained from: illumos MFC after: 2 weeks Notes: svn path=/head/; revision=353621
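The shape of the optimization can be sketched with stub functions: issue asynchronous reads for every leaf block as soon as a full iteration starts, so the later sequential walk finds them cached. Names and the prefetch stub below are invented; this is not the ZAP implementation.

    #include <stdint.h>
    #include <stdio.h>

    static void
    prefetch_leaf(uint64_t blkid)
    {
        /* Placeholder for issuing an asynchronous read of one leaf block. */
        printf("prefetch leaf block %ju\n", (uintmax_t)blkid);
    }

    static void
    zap_iterate_begin(uint64_t nleafs)
    {
        /* Issue all leaf reads up front so they overlap with the walk. */
        for (uint64_t blk = 0; blk < nleafs; blk++)
            prefetch_leaf(blk);
        /* ... the actual entry-by-entry iteration would follow here ... */
    }

    int
    main(void)
    {
        zap_iterate_begin(64);    /* ~64 leaf blocks, as in the example above */
        return (0);
    }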
* MFV r353617: 9425 allow channel programs to be stopped via signals (Andriy Gapon, 2019-10-16; 3 files, -0/+39)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | illumos/illumos-gate@d0cb1fb92629bc0283c88d4719df7285c1612700 https://github.com/illumos/illumos-gate/commit/d0cb1fb92629bc0283c88d4719df7285c1612700 https://www.illumos.org/issues/9425 Problem Statement ZFS Channel program scripts currently require a timeout, so that hung or long-running scripts return a timeout error instead of causing ZFS to get wedged. This limit can currently be set up to 100 million Lua instructions. Even with a limit in place, it would be desirable to have a sys admin (support engineer) be able to cancel a script that is taking a long time. Proposed Solution Make it possible to abort a channel program by sending an interrupt signal.In the underlying txg_wait_sync function, switch the cv_wait to a cv_wait_sig to catch the signal. Once a signal is encountered, the dsl_sync_task function can install a Lua hook that will get called before the Lua interpreter executes a new line of code. The dsl_sync_task can resume with a standard txg_wait_sync call and wait for the txg to complete. Meanwhile, the hook will abort the script and indicate that the channel program was canceled. The kernel returns a EINTR to indicate that the channel program run was canceled. FreeBSD note: the return value of cv_wait_sig() has inverted meaning between us and illumos. Author: Don Brady <don.brady@delphix.com> Obtained from: illumos MFC after: 4 weeks Notes: svn path=/head/; revision=353618
* MFC r353611: 10330 merge recent ZoL vdev and metaslab changes (Andriy Gapon, 2019-10-16; 2 files, -3/+2)
| | | | | | | | | | | | | | | | | | | illumos/illumos-gate@a0b03b161c4df3cfc54fbc741db09b3bdc23ffba https://github.com/illumos/illumos-gate/commit/a0b03b161c4df3cfc54fbc741db09b3bdc23ffba https://www.illumos.org/issues/10330 3 recent ZoL changes in the vdev and metaslab code which we can pull over: PR 8324 c853f382db 8324 Change target size of metaslabs from 256GB to 16GB PR 8290 b194fab0fb 8290 Factor metaslab_load_wait() in metaslab_load() PR 8286 419ba59145 8286 Update vdev_is_spacemap_addressable() for new spacemap encoding Author: Serapheim Dimitropoulos <serapheimd@gmail.com> Obtained from: illumos, ZoL MFC after: 2 weeks Notes: svn path=/head/; revision=353612
* fix up r353565, somehow a few files did not get committed (Andriy Gapon, 2019-10-15; 1 file, -1/+1)
| | | | | | | | MFC after: 3 weeks X-MFC with: r353565 Notes: svn path=/head/; revision=353568
* MFV r353561: 10343 ZoL: Prefix all refcount functions with zfs_ (Andriy Gapon, 2019-10-15; 11 files, -56/+61)
| | | | | | | | | | | | | | | | | | | | | illumos/illumos-gate@e914ace2e9d9bf2dbf9a1f1ce81cb776022096f5 https://github.com/illumos/illumos-gate/commit/e914ace2e9d9bf2dbf9a1f1ce81cb776022096f5 https://www.illumos.org/issues/10343 On the openzfs feature/porting matrix, this is listed as: prefix to refcount funcs/types Having these changes will make it easier to share other work across the different ZFS operating systems. PR 7963 424fd7c3e Prefix all refcount functions with zfs_ PR 7885 & 7932 c13060e47 Linux 4.19-rc3+ compat: Remove refcount_t compat PR 5823 & 5842 4859fe796 Linux 4.11 compat: avoid refcount_t name conflict Author: Tim Schumacher <timschumi@gmx.de> Obtained from: illumos, ZoL MFC after: 3 weeks Notes: svn path=/head/; revision=353565
* MFV r353558: 10572 10579 Fix race in dnode_check_slots_free() (Andriy Gapon, 2019-10-15; 2 files, -1/+5)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | illumos/illumos-gate@aa02ea01948372a32cbf08bfc31c72c32e3fc81e https://github.com/illumos/illumos-gate/commit/aa02ea01948372a32cbf08bfc31c72c32e3fc81e 10572 Fix race in dnode_check_slots_free() https://www.illumos.org/issues/10572 The Fix from ZoL: Currently, dnode_check_slots_free() works by checking dn->dn_type in the dnode to determine if the dnode is reclaimable. However, there is a small window of time between dnode_free_sync() in the first call to dsl_dataset_sync() and when the useraccounting code is run when the type is set DMU_OT_NONE, but the dnode is not yet evictable, leading to crashes. This patch adds the ability for dnodes to track which txg they were last dirtied in and adds a check for this before performing the reclaim. This patch also corrects several instances when dn_dirty_link was treated as a list_node_t when it is technically a multilist_node_t. 10579 Don't allow dnode allocation if dn_holds != 0 https://www.illumos.org/issues/10579 The fix from ZoL: This patch simply fixes a small bug where dnode_hold_impl() could attempt to allocate a dnode that was in the process of being freed, but which still had active references. This patch simply adds the required check. Author: Tom Caputi <tcaputi@datto.com> Reported by: delphij MFC after: 2 weeks X-MFC with: r353176 Notes: svn path=/head/; revision=353559
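The reclaim check can be pictured roughly as below; the struct is heavily simplified and the field name dn_dirty_txg is taken from my reading of the ZoL fix, so treat the whole thing as an illustration rather than the committed code.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DMU_OT_NONE    0

    struct dnode {
        int dn_type;
        uint64_t dn_dirty_txg;    /* last txg this dnode was dirtied in */
    };

    /* A slot is reusable only if its type was cleared and it is fully synced. */
    static bool
    dnode_slot_is_reclaimable(const struct dnode *dn, uint64_t last_synced_txg)
    {
        return (dn->dn_type == DMU_OT_NONE &&
            dn->dn_dirty_txg <= last_synced_txg);
    }

    int
    main(void)
    {
        struct dnode dn = { .dn_type = DMU_OT_NONE, .dn_dirty_txg = 42 };

        printf("reclaimable at synced txg 40: %d\n",
            dnode_slot_is_reclaimable(&dn, 40));
        printf("reclaimable at synced txg 45: %d\n",
            dnode_slot_is_reclaimable(&dn, 45));
        return (0);
    }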
* MFV r350898, r351075: 8423 8199 7432 Implement large_dnode pool feature (Andriy Gapon, 2019-10-07; 8 files, -28/+212)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8423 8199 7432 Implement large_dnode pool feature 7432 Large dnode pool feature 8199 multi-threaded dmu_object_alloc() 8423 Implement large_dnode pool feature 10406 large_dnode changes broke zfs recv of legacy stream llumos/illumos-gate@54811da5ac6b517992fdc173df5d605e4e61fdc0 https://github.com/illumos/illumos-gate/commit/54811da5ac6b517992fdc173df5d605e4e61fdc0 https://www.illumos.org/issues/8423 https://www.illumos.org/issues/8199 https://www.illumos.org/issues/7432 illumos/illumos-gate@811964cd9f1fbae0fc3b93d116269e9b1fca090a https://github.com/illumos/illumos-gate/commit/811964cd9f1fbae0fc3b93d116269e9b1fca090a https://www.illumos.org/issues/10406 ZoL issues: Improved dnode allocation #6564 Clean up large dnode code #6262 Fix dnode_hold() freeing dnode behavior #8172 Fix dnode allocation race #6414, #6439 Partial: Raw sends must be able to decrease nlevels #6821, #6864 Remove unnecessary txg syncs from receive_object() Closes #7197 This updates FreeBSD large_dnode code (that was imported from ZoL) to a version that was committed to illumos. It has some cleanups, improvements and fixes comparing to what we have in FreeBSD now. I think that the most significant update is 8199 multi-threaded dmu_object_alloc(). This commit reverts r351077 that was a revert of r351074 and r351076 and restores those changes. Required atomic operations should be available now on all platforms where we build ZFS. Obtained from: illumos MFC after: 3 weeks Notes: svn path=/head/; revision=353176
* zfs: add root vnode caching (Mateusz Guzik, 2019-10-06; 1 file, -2/+0)
| | | | | | | | | | | This replaces the approach added in r338927. See r353150. Sponsored by: The FreeBSD Foundation Notes: svn path=/head/; revision=353151
* ZFS: add bookmark renaming (Andriy Gapon, 2019-10-03; 1 file, -0/+1)
| | | | | | | | | | | | | | | | | | | | | The feature is implemented as an extension of the existing ZFS_IOC_RENAME ioctl. Both the userland and the DSL interfaces support renaming only a single bookmark at a time. As of now, there is no ZCP interface to the new functionality. I am going to add it once the DSL interface passes a test of time. This change picks up support for zfs_ioc_namecheck_t::ENTITY_NAME that was added to ZoL as part of Redacted Send/Receive feature by Paul Dagnelie <pcd@delphix.com>. This is needed to allow a bookmark name in zc_name. Discussed with: mahrens Reviewed by: bcr (man page) Sponsored by: CyberSecure Differential Revision: https://reviews.freebsd.org/D21795 Notes: svn path=/head/; revision=353037
* Revert r351076 and r351074 because of atomic_swap_64 on 32-bit platforms (Andriy Gapon, 2019-08-15; 8 files, -212/+28)
| | | | | | | Trying to sort it out. Notes: svn path=/head/; revision=351077
* MFV r350898: 8423 8199 7432 Implement large_dnode pool feature (Andriy Gapon, 2019-08-15; 8 files, -28/+212)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8423 8199 7432 Implement large_dnode pool feature 8423 Implement large_dnode pool feature 8199 multi-threaded dmu_object_alloc() 7432 Large dnode pool feature llumos/illumos-gate@54811da5ac6b517992fdc173df5d605e4e61fdc0 https://github.com/illumos/illumos-gate/commit/54811da5ac6b517992fdc173df5d605e4e61fdc0 https://www.illumos.org/issues/8423 https://www.illumos.org/issues/8199 https://www.illumos.org/issues/7432 ZoL issues: Improved dnode allocation #6564 Clean up large dnode code #6262 Fix dnode_hold() freeing dnode behavior #8172 Fix dnode allocation race #6414, #6439 Partial: Raw sends must be able to decrease nlevels #6821, #6864 Remove unnecessary txg syncs from receive_object() Closes #7197 This updates FreeBSD large_dnode code (that was imported from ZoL) to a version that was committed to illumos. It has some cleanups, improvements and fixes comparing to what we have in FreeBSD now. I think that the most significant update is 8199 multi-threaded dmu_object_alloc(). Obtained from: illumos MFC after: 3 weeks Notes: svn path=/head/; revision=351074
* Avoid extra taskq_dispatch() calls by DMU. (Alexander Motin, 2019-06-25; 1 file, -0/+2)
The DMU sync code calls taskq_dispatch() for each sublist of os_dirty_dnodes and os_synced_dnodes. Since the number of sublists by default is equal to the number of CPUs, it will dispatch an equal, potentially large, number of tasks, waking up many CPUs to handle them, even if only one or a few of the sublists actually have any work to do. This change adds a check for empty sublists to avoid this. Notes: svn path=/head/; revision=349381
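The change boils down to a guard like the one below; the types and dispatch function are stand-ins for the taskq machinery, shown only to make the idea concrete.

    #include <stddef.h>
    #include <stdio.h>

    struct sublist {
        size_t num_items;
    };

    static void
    dispatch_sync_task(int idx)
    {
        printf("dispatching task for sublist %d\n", idx);
    }

    static void
    sync_dnode_lists(struct sublist *lists, int nlists)
    {
        for (int i = 0; i < nlists; i++) {
            if (lists[i].num_items == 0)
                continue;    /* skip empty sublists entirely */
            dispatch_sync_task(i);
        }
    }

    int
    main(void)
    {
        struct sublist lists[4] = { {0}, {3}, {0}, {1} };

        sync_dnode_lists(lists, 4);
        return (0);
    }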
* Properly align struct multilist_sublist to cache line. (Alexander Motin, 2019-06-14; 1 file, -4/+3)
| | | | | | | | | | Manual Illumos alignment does not fit us due to different kmutex_t size. MFC after: 1 week Sponsored by: iXsystems, Inc. Notes: svn path=/head/; revision=349035
* MFV r348578: 9962 zil_commit should omit cache thrash (Alexander Motin, 2019-06-03; 1 file, -6/+8)
| | | | | | | | | | | | | | illumos/illumos-gate@cab3a55e158118937e07d059c46f1bc14d1f254d Reviewed by: Matt Ahrens <matt@delphix.com> Reviewed by: Brad Lewis <brad.lewis@delphix.com> Reviewed by: Patrick Mooney <patrick.mooney@joyent.com> Reviewed by: Jerry Jelinek <jerry.jelinek@joyent.com> Approved by: Joshua M. Clulow <josh@sysmgr.org> Author: Prakash Surya <prakash.surya@delphix.com> Notes: svn path=/head/; revision=348579
* MFV r348553: 9681 ztest failure in spa_history_log_internal due to spa_rename() (Alexander Motin, 2019-06-03; 1 file, -1/+0)
| | | | | | | | | | | | illumos/illumos-gate@6aee0ad76969eb0027131b3a338f2d94ae86f728 Reviewed by: Prakash Surya <prakash.surya@delphix.com> Reviewed by: Serapheim Dimitropoulos <serapheim.dimitro@delphix.com> Approved by: Robert Mustacchi <rm@joyent.com> Author: Matthew Ahrens <mahrens@delphix.com> Notes: svn path=/head/; revision=348565
* MFV r348551: 9862 fix typo in comment in vdev_impl.h (Alexander Motin, 2019-06-03; 1 file, -1/+1)
| | | | | | | | | | | | illumos/illumos-gate@84927f52bd837f6e4882a19e43fd026f1828d910 Reviewed by: Matthew Ahrens <mahrens@delphix.com> Reviewed by: Brian Behlendorf <behlendorf1@llnl.gov> Approved by: Robert Mustacchi <rm@joyent.com> Author: Allan Jude <allanjude@freebsd.org> Notes: svn path=/head/; revision=348563
* MFV r348548: 9617 too-frequent TXG sync causes excessive write inflation (Alexander Motin, 2019-06-03; 1 file, -1/+1)
| | | | | | | | | | | | | | illumos/illumos-gate@7928f4baf4ab3230557eb6289be68aa7a3003f38 Reviewed by: Serapheim Dimitropoulos <serapheim.dimitro@delphix.com> Reviewed by: Brad Lewis <brad.lewis@delphix.com> Reviewed by: George Wilson <george.wilson@delphix.com> Reviewed by: Andrew Stormont <andyjstormont@gmail.com> Approved by: Robert Mustacchi <rm@joyent.com> Author: Matthew Ahrens <mahrens@delphix.com> Notes: svn path=/head/; revision=348561
* Fix minor mismerges. (Alexander Motin, 2019-04-26; 1 file, -8/+1)
| | | | | | | | | No functional change. MFC after: 1 week Notes: svn path=/head/; revision=346760
* MFV r336930: 9284 arc_reclaim_thread has 2 jobs (Alexander Motin, 2019-03-15; 1 file, -0/+4)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | `arc_reclaim_thread()` calls `arc_adjust()` after calling `arc_kmem_reap_now()`; `arc_adjust()` signals `arc_get_data_buf()` to indicate that we may no longer be `arc_is_overflowing()`. The problem is, `arc_kmem_reap_now()` can take several seconds to complete, has no impact on `arc_is_overflowing()`, but due to how the code is structured, can impact how long the ARC will remain in the `arc_is_overflowing()` state. The fix is to use seperate threads to: 1. keep `arc_size` under `arc_c`, by calling `arc_adjust()`, which improves `arc_is_overflowing()` 2. keep enough free memory in the system, by calling `arc_kmem_reap_now()` plus `arc_shrink()`, which improves `arc_available_memory()`. illumos/illumos-gate@de753e34f9c399037936e8bc547d823bba9d4b0d Reviewed by: Matt Ahrens <mahrens@delphix.com> Reviewed by: Serapheim Dimitropoulos <serapheim@delphix.com> Reviewed by: Pavel Zakharov <pavel.zakharov@delphix.com> Reviewed by: Dan Kimmel <dan.kimmel@delphix.com> Reviewed by: Paul Dagnelie <pcd@delphix.com> Reviewed by: Dan McDonald <danmcd@joyent.com> Reviewed by: Tim Kordas <tim.kordas@joyent.com> Approved by: Garrett D'Amore <garrett@damore.org> Author: Brad Lewis <brad.lewis@delphix.com> Notes: svn path=/head/; revision=345200
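A toy pthread illustration of the split: one thread handles the cache-size adjustment and can signal waiters promptly, while a second thread performs the potentially slow memory reclaim. The work functions are stubs, not ARC code.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *
    adjust_thread(void *arg)
    {
        (void)arg;
        printf("adjust: shrink cache toward target, signal waiters\n");
        return (NULL);
    }

    static void *
    reap_thread(void *arg)
    {
        (void)arg;
        printf("reap: reclaim kmem caches (may take a while)\n");
        sleep(1);    /* stands in for a slow reclaim pass */
        return (NULL);
    }

    int
    main(void)
    {
        pthread_t adjust, reap;

        pthread_create(&adjust, NULL, adjust_thread, NULL);
        pthread_create(&reap, NULL, reap_thread, NULL);
        pthread_join(adjust, NULL);
        pthread_join(reap, NULL);
        return (0);
    }

With the jobs separated, a slow reclaim pass no longer delays the signal that the cache is no longer overflowing, which is the behavior the commit message describes.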
* MFV/ZoL: Disable LBA weighting on files and SSDs (Alexander Motin, 2019-03-08; 1 file, -3/+1)
| | | | | | | | | | | | | | | | | | | | | | | | The LBA weighting makes sense on rotational media where the outer tracks have twice the bandwidth of the inner tracks. However, it is detrimental on nonrotational media such as solid state disks, where the only effect is to ensure that metaslabs enter the best-fit allocation behavior sooner, which is detrimental to performance. It also makes no sense on files where the underlying filesystem can arrange things however it wants. Author: Richard Yao <ryao@gentoo.org> Signed-off-by: Richard Yao <ryao@gentoo.org> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Closes #3712 zfsonlinux/zfs@fb40095f5f0853946f8150481ca22602d1334dfe To reduce code divergence this merge replaces equivalent but different FreeBSD code detecting non-rotating medium vdevs. MFC after: 1 month Notes: svn path=/head/; revision=344936
* zfs: depessimize zfs_root with rmlocks (Mateusz Guzik, 2018-09-25; 1 file, -0/+2)
Currently the VFS calls the root method on each absolute lookup and when crossing mount points. zfs_root ends up looking up the inode internally as if it was not instantiated, which results in significant lock contention on systems like EPYC. Store the vnode in the mount point and protect the access with rmlocks. This is a temporary hack for 12.0. Sample result: before: make -s -j 128 buildkernel 2778.09s user 3319.45s system 8370% cpu 1:12.85 total; after: make -s -j 128 buildkernel 3199.57s user 1772.78s system 8232% cpu 1:00.40 total. Tested by: pho (zfs mount/unmount tests) Reviewed by: kib, mav, sef (different parts) Approved by: re (gjb) Differential Revision: https://reviews.freebsd.org/D17233 Notes: svn path=/head/; revision=338927
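A user-space approximation of the caching scheme, with a pthread rwlock standing in for the kernel rmlock and a void pointer for the vnode; this is a sketch of the idea, not the committed zfs_root() change.

    #include <pthread.h>
    #include <stddef.h>
    #include <stdio.h>

    struct zfs_mount {
        pthread_rwlock_t z_rootlock;
        void *z_rootvnode;    /* cached root "vnode" */
    };

    static void *
    lookup_root_slow(void)
    {
        static int dummy_root;

        return (&dummy_root);    /* stands in for the expensive inode lookup */
    }

    static void *
    zfs_root(struct zfs_mount *zm)
    {
        void *vp;

        /* Hot path: only a read lock, so concurrent lookups do not contend. */
        pthread_rwlock_rdlock(&zm->z_rootlock);
        vp = zm->z_rootvnode;
        pthread_rwlock_unlock(&zm->z_rootlock);
        if (vp != NULL)
            return (vp);

        /* Miss: do the expensive lookup and publish it for later callers. */
        vp = lookup_root_slow();
        pthread_rwlock_wrlock(&zm->z_rootlock);
        zm->z_rootvnode = vp;
        pthread_rwlock_unlock(&zm->z_rootlock);
        return (vp);
    }

    int
    main(void)
    {
        struct zfs_mount zm = {
            .z_rootlock = PTHREAD_RWLOCK_INITIALIZER,
            .z_rootvnode = NULL,
        };

        printf("root vnode: %p\n", zfs_root(&zm));
        printf("root vnode (cached): %p\n", zfs_root(&zm));
        return (0);
    }

The write lock is needed only to publish the pointer once; presumably the real change also clears the cached vnode when the filesystem is unmounted.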
* Remove {max/min}_offset() macros, use vm_map_{max/min}() inlines. (Konstantin Belousov, 2018-08-29; 1 file, -7/+0)
| | | | | | | | | | | | | | | Exposing max_offset and min_offset defines in public headers is causing clashes with variable names, for example when building QEMU. Based on the submission by: royger Reviewed by: alc, markj (previous version) Sponsored by: The FreeBSD Foundation (kib) MFC after: 1 week Approved by: re (marius) Differential revision: https://reviews.freebsd.org/D16881 Notes: svn path=/head/; revision=338370
* Make dnode definition uniform on !x86 (Matt Macy, 2018-08-21; 1 file, -6/+0)
| | | | | | | gcc4 requires -fms-extensions to accept anonymous union members Notes: svn path=/head/; revision=338128
* fix build DN_MAX_BONUSLEN -> DN_OLD_MAX_BONUSLEN (Matt Macy, 2018-08-12; 1 file, -1/+1)
| | | | Notes: svn path=/head/; revision=337675
* Restore legacy dnode_phys layout on tier 2 arches (Matt Macy, 2018-08-12; 1 file, -9/+38)
| | | | | | | Evidently gcc4 doesn't support anonymous union members Notes: svn path=/head/; revision=337674
* MFV/ZoL: add dbuf stats (Matt Macy, 2018-08-12; 3 files, -0/+35)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | NB: disabled pending the addition of KSTAT_TYPE_RAW support to the SPL commit e0b0ca983d6897bcddf05af2c0e5d01ff66f90db Author: Brian Behlendorf <behlendorf1@llnl.gov> Date: Wed Oct 2 17:11:19 2013 -0700 Add visibility in to cached dbufs Currently there is no mechanism to inspect which dbufs are being cached by the system. There are some coarse counters in arcstats by they only give a rough idea of what's being cached. This patch aims to improve the current situation by adding a new dbufs kstat. When read this new kstat will walk all cached dbufs linked in to the dbuf_hash. For each dbuf it will dump detailed information about the buffer. It will also dump additional information about the referenced arc buffer and its related dnode. This provides a more complete view in to exactly what is being cached. With this generic infrastructure in place utilities can be written to post-process the data to understand exactly how the caching is working. For example, the data could be processed to show a list of all cached dnodes and how much space they're consuming. Or a similar list could be generated based on dnode type. Many other ways to interpret the data exist based on what kinds of questions you're trying to answer. Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Signed-off-by: Prakash Surya <surya1@llnl.gov> Notes: svn path=/head/; revision=337670
* MFV/ZoL: Implement large_dnode pool feature (Matt Macy, 2018-08-12; 10 files, -18/+99)
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | commit 50c957f702ea6d08a634e42f73e8a49931dd8055 Author: Ned Bass <bass6@llnl.gov> Date: Wed Mar 16 18:25:34 2016 -0700 Implement large_dnode pool feature Justification ------------- This feature adds support for variable length dnodes. Our motivation is to eliminate the overhead associated with using spill blocks. Spill blocks are used to store system attribute data (i.e. file metadata) that does not fit in the dnode's bonus buffer. By allowing a larger bonus buffer area the use of a spill block can be avoided. Spill blocks potentially incur an additional read I/O for every dnode in a dnode block. As a worst case example, reading 32 dnodes from a 16k dnode block and all of the spill blocks could issue 33 separate reads. Now suppose those dnodes have size 1024 and therefore don't need spill blocks. Then the worst case number of blocks read is reduced to from 33 to two--one per dnode block. In practice spill blocks may tend to be co-located on disk with the dnode blocks so the reduction in I/O would not be this drastic. In a badly fragmented pool, however, the improvement could be significant. ZFS-on-Linux systems that make heavy use of extended attributes would benefit from this feature. In particular, ZFS-on-Linux supports the xattr=sa dataset property which allows file extended attribute data to be stored in the dnode bonus buffer as an alternative to the traditional directory-based format. Workloads such as SELinux and the Lustre distributed filesystem often store enough xattr data to force spill bocks when xattr=sa is in effect. Large dnodes may therefore provide a performance benefit to such systems. Other use cases that may benefit from this feature include files with large ACLs and symbolic links with long target names. Furthermore, this feature may be desirable on other platforms in case future applications or features are developed that could make use of a larger bonus buffer area. Implementation -------------- The size of a dnode may be a multiple of 512 bytes up to the size of a dnode block (currently 16384 bytes). A dn_extra_slots field was added to the current on-disk dnode_phys_t structure to describe the size of the physical dnode on disk. The 8 bits for this field were taken from the zero filled dn_pad2 field. The field represents how many "extra" dnode_phys_t slots a dnode consumes in its dnode block. This convention results in a value of 0 for 512 byte dnodes which preserves on-disk format compatibility with older software. Similarly, the in-memory dnode_t structure has a new dn_num_slots field to represent the total number of dnode_phys_t slots consumed on disk. Thus dn->dn_num_slots is 1 greater than the corresponding dnp->dn_extra_slots. This difference in convention was adopted because, unlike on-disk structures, backward compatibility is not a concern for in-memory objects, so we used a more natural way to represent size for a dnode_t. The default size for newly created dnodes is determined by the value of a new "dnodesize" dataset property. By default the property is set to "legacy" which is compatible with older software. 
Setting the property to "auto" will allow the filesystem to choose the most suitable dnode size. Currently this just sets the default dnode size to 1k, but future code improvements could dynamically choose a size based on observed workload patterns. Dnodes of varying sizes can coexist within the same dataset and even within the same dnode block. For example, to enable automatically-sized dnodes, run # zfs set dnodesize=auto tank/fish The user can also specify literal values for the dnodesize property. These are currently limited to powers of two from 1k to 16k. The power-of-2 limitation is only for simplicity of the user interface. Internally the implementation can handle any multiple of 512 up to 16k, and consumers of the DMU API can specify any legal dnode value. The size of a new dnode is determined at object allocation time and stored as a new field in the znode in-memory structure. New DMU interfaces are added to allow the consumer to specify the dnode size that a newly allocated object should use. Existing interfaces are unchanged to avoid having to update every call site and to preserve compatibility with external consumers such as Lustre. The new interfaces names are given below. The versions of these functions that don't take a dnodesize parameter now just call the _dnsize() versions with a dnodesize of 0, which means use the legacy dnode size. New DMU interfaces: dmu_object_alloc_dnsize() dmu_object_claim_dnsize() dmu_object_reclaim_dnsize() New ZAP interfaces: zap_create_dnsize() zap_create_norm_dnsize() zap_create_flags_dnsize() zap_create_claim_norm_dnsize() zap_create_link_dnsize() The constant DN_MAX_BONUSLEN is renamed to DN_OLD_MAX_BONUSLEN. The spa_maxdnodesize() function should be used to determine the maximum bonus length for a pool. These are a few noteworthy changes to key functions: * The prototype for dnode_hold_impl() now takes a "slots" parameter. When the DNODE_MUST_BE_FREE flag is set, this parameter is used to ensure the hole at the specified object offset is large enough to hold the dnode being created. The slots parameter is also used to ensure a dnode does not span multiple dnode blocks. In both of these cases, if a failure occurs, ENOSPC is returned. Keep in mind, these failure cases are only possible when using DNODE_MUST_BE_FREE. If the DNODE_MUST_BE_ALLOCATED flag is set, "slots" must be 0. dnode_hold_impl() will check if the requested dnode is already consumed as an extra dnode slot by an large dnode, in which case it returns ENOENT. * The function dmu_object_alloc() advances to the next dnode block if dnode_hold_impl() returns an error for a requested object. This is because the beginning of the next dnode block is the only location it can safely assume to either be a hole or a valid starting point for a dnode. * dnode_next_offset_level() and other functions that iterate through dnode blocks may no longer use a simple array indexing scheme. These now use the current dnode's dn_num_slots field to advance to the next dnode in the block. This is to ensure we properly skip the current dnode's bonus area and don't interpret it as a valid dnode. zdb --- The zdb command was updated to display a dnode's size under the "dnsize" column when the object is dumped. For ZIL create log records, zdb will now display the slot count for the object. ztest ----- Ztest chooses a random dnodesize for every newly created object. The random distribution is more heavily weighted toward small dnodes to better simulate real-world datasets. 
Unused bonus buffer space is filled with non-zero values computed from the object number, dataset id, offset, and generation number. This helps ensure that the dnode traversal code properly skips the interior regions of large dnodes, and that these interior regions are not overwritten by data belonging to other dnodes. A new test visits each object in a dataset. It verifies that the actual dnode size matches what was stored in the ztest block tag when it was created. It also verifies that the unused bonus buffer space is filled with the expected data patterns. ZFS Test Suite -------------- Added six new large dnode-specific tests, and integrated the dnodesize property into existing tests for zfs allow and send/recv. Send/Receive ------------ ZFS send streams for datasets containing large dnodes cannot be received on pools that don't support the large_dnode feature. A send stream with large dnodes sets a DMU_BACKUP_FEATURE_LARGE_DNODE flag which will be unrecognized by an incompatible receiving pool so that the zfs receive will fail gracefully. While not implemented here, it may be possible to generate a backward-compatible send stream from a dataset containing large dnodes. The implementation may be tricky, however, because the send object record for a large dnode would need to be resized to a 512 byte dnode, possibly kicking in a spill block in the process. This means we would need to construct a new SA layout and possibly register it in the SA layout object. The SA layout is normally just sent as an ordinary object record. But if we are constructing new layouts while generating the send stream we'd have to build the SA layout object dynamically and send it at the end of the stream. For sending and receiving between pools that do support large dnodes, the drr_object send record type is extended with a new field to store the dnode slot count. This field was repurposed from unused padding in the structure. ZIL Replay ---------- The dnode slot count is stored in the uppermost 8 bits of the lr_foid field. The bits were unused as the object id is currently capped at 48 bits. Resizing Dnodes --------------- It should be possible to resize a dnode when it is dirtied if the current dnodesize dataset property differs from the dnode's size, but this functionality is not currently implemented. Clearly a dnode can only grow if there are sufficient contiguous unused slots in the dnode block, but it should always be possible to shrink a dnode. Growing dnodes may be useful to reduce fragmentation in a pool with many spill blocks in use. Shrinking dnodes may be useful to allow sending a dataset to a pool that doesn't support the large_dnode feature. Feature Reference Counting -------------------------- The reference count for the large_dnode pool feature tracks the number of datasets that have ever contained a dnode of size larger than 512 bytes. The first time a large dnode is created in a dataset the dataset is converted to an extensible dataset. This is a one-way operation and the only way to decrement the feature count is to destroy the dataset, even if the dataset no longer contains any large dnodes. The complexity of reference counting on a per-dnode basis was too high, so we chose to track it on a per-dataset basis similarly to the large_block feature. Signed-off-by: Ned Bass <bass6@llnl.gov> Signed-off-by: Brian Behlendorf <behlendorf1@llnl.gov> Closes #3542 Notes: svn path=/head/; revision=337669
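The on-disk size convention described above lends itself to a tiny worked example: a dnode's size is a multiple of 512 bytes up to the 16K dnode block size, and dn_extra_slots records how many additional 512-byte slots it occupies, so legacy 512-byte dnodes encode as 0. The constants below are taken from that description; the helper itself is just an illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define DNODE_MIN_SIZE      512      /* legacy dnode size */
    #define DNODE_BLOCK_SIZE    16384    /* upper limit: one dnode block */

    static uint8_t
    dnode_extra_slots(uint32_t dnodesize)
    {
        /* e.g. dnodesize=1024 -> 1 extra slot, 16384 -> 31 extra slots */
        return ((uint8_t)(dnodesize / DNODE_MIN_SIZE - 1));
    }

    int
    main(void)
    {
        printf("1k dnode: %u extra slots\n", dnode_extra_slots(1024));
        printf("16k dnode: %u extra slots\n", dnode_extra_slots(16384));
        return (0);
    }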