path: root/sys/vm/uma_core.c
* Eliminate a stale debug message.
  Author: Alan Cox | Date: 2014-12-31 | 1 file changed, -3/+0 lines
  The per-CPU cache locks were replaced by critical sections in r145686.
  PR: 193254
  Submitted by: luke.tw@gmail.com
  MFC after: 3 days
  Notes: svn path=/head/; revision=276474
* Provide mutual exclusion between zone allocation/destruction and
  uma_reclaim().
  Author: Konstantin Belousov | Date: 2014-11-30 | 1 file changed, -2/+31 lines
  Reclamation code must not see half-constructed or half-destroyed
  zones. Ensure this by bracing uma_zcreate() and uma_zdestroy() with a
  shared-locked sx, and taking the sx exclusively in uma_reclaim().
  Zones are usually not created or destroyed during normal system
  operation, but tmpfs mounts do cause zone operations and exposed the
  bug. Another solution would be to expose a new keg on the uma_kegs
  list only after the corresponding zone is fully constructed, with
  similar treatment for destruction, but that would probably require
  riskier code rearrangement.
  Reported and tested by: pho
  Discussed with: avg
  Sponsored by: The FreeBSD Foundation
  MFC after: 2 weeks
  Notes: svn path=/head/; revision=275347
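  The bracing follows the standard sx(9) pattern; a minimal sketch,
  with an illustrative lock name (the real lock lives in uma_core.c):

      static struct sx uma_reclaim_lock;
      sx_init(&uma_reclaim_lock, "umareclaim");

      /* uma_zcreate()/uma_zdestroy(): many may run concurrently */
      sx_slock(&uma_reclaim_lock);
      /* ... construct or destroy the zone ... */
      sx_sunlock(&uma_reclaim_lock);

      /* uma_reclaim(): excludes all creators and destructors */
      sx_xlock(&uma_reclaim_lock);
      /* ... only fully-constructed zones are visible here ... */
      sx_xunlock(&uma_reclaim_lock);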
* This is the much-discussed major upgrade to the random(4) device,
  known to you all as /dev/random.
  Author: Mark Murray | Date: 2014-10-30 | 1 file changed, -0/+35 lines
  This code has had an extensive rewrite and a good series of reviews,
  both by the author and other parties. This means a lot of code has
  been simplified. Pluggable structures for high-rate entropy
  generators are available, and it is most definitely not the case that
  /dev/random can be driven by only a hardware source any more. This
  has been designed out of the device. Hardware sources are stirred
  into the CSPRNG (Yarrow, Fortuna) like any other entropy source.
  Pluggable modules may be written by third parties for additional
  sources.
  The harvesting structures, and consequently the locking, have been
  simplified. Entropy harvesting is done in a more general way (the
  documentation for this will follow). There is some GREAT entropy to
  be had in the UMA allocator, but it is disabled for now as messing
  with that is likely to annoy many people.
  The venerable (but effective) Yarrow algorithm, which is no longer
  supported by its authors, now has an alternative: Fortuna. For now,
  Yarrow is retained as the default algorithm, but this may be changed
  using a kernel option. It is intended to make Fortuna the default
  algorithm for 11.0. Interested parties are encouraged to read
  ISBN 978-0-470-47424-2, "Cryptography Engineering" by Ferguson,
  Schneier and Kohno, for Fortuna's gory details. Heck, read it anyway.
  Many thanks to Arthur Mesh who did early grunt work, and who got
  caught in the crossfire rather more than he deserved to. My thanks
  also to folks who helped me thresh this out on whiteboards and in the
  odd "Hallway track", or otherwise.
  My Nomex pants are on. Let the feedback commence!
  Reviewed by: trasz, des(partial), imp(partial?), rwatson(partial?)
  Approved by: so(des)
  Notes: svn path=/head/; revision=273872
* Change the UMA mutex into a rwlock.
  Author: Bryan Venteicher | Date: 2014-10-05 | 1 file changed, -20/+20 lines
  Acquire the lock in read mode when it is only needed to ensure the
  stability of the keg list. The UMA lock may be held for a long time
  (relatively speaking) in uma_reclaim() on machines with lots of
  zones/kegs. If uma_timeout() were to fire during that period,
  subsequent callouts on that CPU could be significantly delayed.
  Reviewed by: jhb
  Notes: svn path=/head/; revision=272573
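  A sketch of the resulting pattern, with an illustrative lock name:

      static struct rwlock uma_rwlock;
      rw_init(&uma_rwlock, "uma");

      /* uma_timeout(): only stabilize the keg list, so read-lock */
      rw_rlock(&uma_rwlock);
      LIST_FOREACH(keg, &uma_kegs, uk_link) {
              /* ... per-keg housekeeping ... */
      }
      rw_runlock(&uma_rwlock);

      /* keg creation/destruction still takes the lock exclusively */
      rw_wlock(&uma_rwlock);
      /* ... modify the keg list ... */
      rw_wunlock(&uma_rwlock);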
* Remove stray uma_mtx lock/unlock in zone_drain_wait().
  Author: Bryan Venteicher | Date: 2014-10-05 | 1 file changed, -2/+0 lines
  Callers of zone_drain_wait(M_WAITOK) do not need to hold the uma_mtx
  (and did not), but we would attempt to unlock and relock the mutex if
  we had to sleep because the zone was already draining. The M_NOWAIT
  callers may hold the uma_mtx, but we do not sleep in that case.
  Reviewed by: jhb
  MFC after: 3 days
  Notes: svn path=/head/; revision=272550
* Pull in r267961 and r267973 again. Fixes for the reported issues
  will follow.
  Author: Hans Petter Selasky | Date: 2014-06-28 | 1 file changed, -2/+1 lines
  Notes: svn path=/head/; revision=267992
* Revert r267961, r267973.
  Author: Glen Barber | Date: 2014-06-27 | 1 file changed, -1/+2 lines
  These changes prevent sysctl(8) from returning proper output, such as:
  1) no output from sysctl(8)
  2) erroneously returning ENOMEM with tools like truss(1) or uname(1):
       truss: can not get etype: Cannot allocate memory
  Notes: svn path=/head/; revision=267985
* Extend the meaning of the CTLFLAG_TUN flag to automatically check
  whether there is an environment variable which shall initialize the
  SYSCTL during early boot.
  Author: Hans Petter Selasky | Date: 2014-06-27 | 1 file changed, -2/+1 lines
  This works for all SYSCTL types, both statically and dynamically
  created, except for the SYSCTL NODE type and SYSCTLs which belong to
  VNETs. A new flag, CTLFLAG_NOFETCH, has been added for the case where
  a tunable sysctl has a custom initialization function, allowing the
  sysctl to still be marked as a tunable. The kernel SYSCTL API is
  mostly the same, with a few exceptions for some special operations
  like iterating the children of a static/extern SYSCTL node. This
  operation should probably be factored out into a common macro, since
  some device drivers use it. The reason for changing the SYSCTL API
  was the need for a SYSCTL parent OID pointer, and not only the SYSCTL
  parent OID list pointer, in order to quickly generate the sysctl
  path. The motivation behind this patch is to avoid parameter loading
  kludges inside the OFED driver subsystem. Instead of adding special
  code to the OFED driver subsystem to post-load tunables into
  dynamically created sysctls, we generalize this in the kernel.
  Other changes:
  - Corrected a possibly incorrect sysctl name from "hw.cbb.intr_mask"
    to "hw.pcic.intr_mask".
  - Removed redundant TUNABLE statements throughout the kernel.
  - Some minor code rewrites in connection with removing unneeded
    TUNABLE statements.
  - Added a missing SYSCTL_DECL().
  - Wrapped two very long lines.
  - Avoid malloc()/free() inside sysctl string handling, in case it is
    called to initialize a sysctl from a tunable, since malloc()/free()
    are not ready when sysctls from the sysctl dataset are registered.
  - Bumped FreeBSD version to indicate the SYSCTL API change.
  MFC after: 2 weeks
  Sponsored by: Mellanox Technologies
  Notes: svn path=/head/; revision=267961
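  With this change a single declaration both creates the sysctl and
  fetches its boot-time tunable; a hedged sketch (the hw.mydev node and
  variable are invented, and a SYSCTL_NODE for them is assumed to be
  declared elsewhere):

      /* settable at runtime and from loader.conf as hw.mydev.debug */
      static int mydev_debug = 0;
      SYSCTL_INT(_hw_mydev, OID_AUTO, debug,
          CTLFLAG_RW | CTLFLAG_TUN, &mydev_debug, 0,
          "Enable mydev debugging");

      /* CTLFLAG_NOFETCH would suppress the automatic kenv lookup
         when the driver fetches and sanitizes the value itself */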
* Modify vm_page_unwire() and vm_page_enqueue() to directly accept the
  queue onto which pages that are going to be unwired should be
  enqueued.
  Author: Attilio Rao | Date: 2014-06-16 | 1 file changed, -1/+1 lines
  Also add stronger checks to the enqueue/dequeue operations for the
  pagequeues when adding and removing pages from them.
  For unmanaged pages, of course, the queue parameter of
  vm_page_unwire() is ignored, just as the active parameter was before.
  This makes adding new pagequeues quicker. See the sketch below for
  the new calling convention.
  This change effectively modifies the KPI; __FreeBSD_version will,
  however, be bumped only when the cache of free pages is finally
  evicted.
  Sponsored by: EMC / Isilon storage division
  Reviewed by: alc
  Tested by: pho
  Notes: svn path=/head/; revision=267548
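  A minimal sketch of the post-change convention (assuming the
  two-argument signature introduced here):

      /* drop our wiring; let the page age out via the inactive queue */
      vm_page_lock(m);
      vm_page_unwire(m, PQ_INACTIVE);
      vm_page_unlock(m);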
* Introduce a new "256 Bucket" zone to split requests and reduce
  congestion on the "128 Bucket" zone lock.
  Author: Alexander Motin | Date: 2014-06-12 | 1 file changed, -1/+2 lines
  MFC after: 2 weeks
  Sponsored by: iXsystems, Inc.
  Notes: svn path=/head/; revision=267391
* When allocating a new bucket for a bucket zone, never take it from
  that zone itself, since that will almost certainly fail. Take the
  next bigger zone instead.
  Author: Alexander Motin | Date: 2014-06-12 | 1 file changed, -0/+2 lines
  This situation should not happen with the original bucket zone
  configuration: the "32 Bucket" zone uses "64 Bucket" and vice versa.
  But if the "64 Bucket" zone lock is congested, the zone may grow its
  bucket size and start biting itself.
  MFC after: 2 weeks
  Sponsored by: iXsystems, Inc.
  Notes: svn path=/head/; revision=267387
* After r251709, avoid a clang 3.4 warning about an unused static const
  variable (uma_max_ipers) when asserts are disabled.
  Author: Dimitry Andric | Date: 2014-02-14 | 1 file changed, -4/+1 lines
  Reviewed by: glebius
  MFC after: 3 days
  Notes: svn path=/head/; revision=261896
* Make the M_ZERO flag work correctly on UMA_ZONE_PCPU zones.
  Author: Gleb Smirnoff | Date: 2014-02-10 | 1 file changed, -2/+14 lines
  Sponsored by: Nginx, Inc.
  Notes: svn path=/head/; revision=261723
* Provide macros that allow easy export of uma(9) zone limits and
  current usage via sysctl(9):
  Author: Gleb Smirnoff | Date: 2014-02-07 | 1 file changed, -0/+29 lines
    SYSCTL_UMA_MAX()
    SYSCTL_ADD_UMA_MAX()
    SYSCTL_UMA_CUR()
    SYSCTL_ADD_UMA_CUR()
  Sponsored by: Nginx, Inc.
  Notes: svn path=/head/; revision=261593
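  A hedged guess at how the static forms would be used, assuming the
  parameter order of the standard SYSCTL macros (parent, number, name,
  access, zone, description); the zone and oid names are invented, and
  sys/uma.h should be consulted for the real signatures:

      /* export the item limit of a hypothetical foo zone, read/write */
      SYSCTL_UMA_MAX(_vm, OID_AUTO, foo_zone_max, CTLFLAG_RW,
          &foo_zone, "Maximum number of foo items");

      /* export its current usage, read-only */
      SYSCTL_UMA_CUR(_vm, OID_AUTO, foo_zone_cur, CTLFLAG_RD,
          &foo_zone, "Current number of foo items");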
* In keg_dtor(), print out the keg name in the "Freed UMA keg was not
  empty" message printed to the console.
  Author: Craig Rodrigues | Date: 2013-11-29 | 1 file changed, -1/+2 lines
  This makes it easier to track down the source of certain memory
  leaks.
  Suggested by: adrian
  Notes: svn path=/head/; revision=258737
* Add a bucket size column to the `show uma` DDB command.
  Author: Alexander Motin | Date: 2013-11-28 | 1 file changed, -5/+34 lines
  Also add a `show umacache` command to show similar stats for
  cache-only UMA zones.
  Notes: svn path=/head/; revision=258716
* Make UMA not blindly force offpage slab header allocation for large
  (> PAGE_SIZE) zones.
  Author: Alexander Motin | Date: 2013-11-27 | 1 file changed, -2/+16 lines
  If the zone size is not a multiple of PAGE_SIZE, there may be enough
  space for the header in the last page, so we can avoid the extra
  header memory allocation and hash table update/lookup. ZFS creates a
  bunch of odd-sized UMA zones (5120, 6144, 7168, 10240, 14336). This
  change makes good use of at least some of the otherwise lost memory
  there.
  Reviewed by: avg
  Notes: svn path=/head/; revision=258693
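  An illustrative calculation (not taken from the commit): a 5120-byte
  zone needs a 2-page slab, with one item per slab:

      slab  = 2 * PAGE_SIZE = 8192 bytes
      spare = 8192 - 5120   = 3072 bytes

  That leaves far more than enough room in the last page for the slab
  header, so neither an offpage header allocation nor a hash table
  lookup is needed.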
* Don't count bucket allocation failures for UMA zones as their own
  failures.
  Author: Alexander Motin | Date: 2013-11-27 | 1 file changed, -5/+3 lines
  There are good reasons for such failures to happen, such as recursion
  prevention, and they are not fatal since buckets are just an
  optimization mechanism. Real bucket allocation failures are counted
  by the bucket zones themselves anyway, and we don't need double
  accounting there.
  Notes: svn path=/head/; revision=258691
* Fix a bug introduced in r252226, where the udata argument passed to
  bucket_alloc() was used without first making sure it was really
  intended for us.
  Author: Alexander Motin | Date: 2013-11-27 | 1 file changed, -3/+4 lines
  On some of my systems this bug made the user argument passed by ZFS
  code to uma_zalloc_arg() unexpectedly block UMA per-CPU caches for
  those zones.
  Notes: svn path=/head/; revision=258690
* When purging per-CPU UMA caches, do not return empty buckets to the
  global full-bucket cache, so as not to trigger an assertion if an
  allocation happens before that global cache is purged.
  Author: Alexander Motin | Date: 2013-11-23 | 1 file changed, -4/+16 lines
  Notes: svn path=/head/; revision=258497
* Implement a mechanism to safely, but slowly, purge UMA per-CPU
  caches.
  Author: Alexander Motin | Date: 2013-11-19 | 1 file changed, -0/+77 lines
  This is a last resort for very low memory conditions, in case other
  measures to free memory were ineffective. Sequentially cycle through
  all CPUs and extract per-CPU cache buckets into the zone cache, from
  where they can be freed.
  Notes: svn path=/head/; revision=258340
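  The safe way to touch another CPU's cache is to briefly become that
  CPU; a rough sketch of the idea (the drain helper is hypothetical):

      CPU_FOREACH(cpu) {
              thread_lock(curthread);
              sched_bind(curthread, cpu);     /* run on the target CPU */
              thread_unlock(curthread);
              /* move this CPU's buckets into the zone cache */
              cache_drain_cpu(zone, cpu);     /* hypothetical helper */
      }
      thread_lock(curthread);
      sched_unbind(curthread);
      thread_unlock(curthread);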
* Grow the UMA zone bucket size also on lock congestion during item
  free.
  Author: Alexander Motin | Date: 2013-11-19 | 1 file changed, -2/+13 lines
  Lock congestion is the same whether it happens on alloc or free, so
  handle it equally. Now that we have back pressure, there is no
  problem with growing buckets a bit faster. Growth is in any case much
  slower than in 9.x.
  Notes: svn path=/head/; revision=258338
* Add two new UMA bucket zones to store 3 and 9 items per bucket.
  Author: Alexander Motin | Date: 2013-11-19 | 1 file changed, -0/+2 lines
  These new buckets make bucket size self-tuning softer and more
  precise. Without them there are buckets for 1, 5, 13, 29, ... items.
  While a roughly 2x step between the bigger sizes is fine, at the
  smallest sizes the steps are 5x and 2.6x respectively. The new
  buckets make that line look like 1, 3, 5, 9, 13, 29, reducing the
  jumps between steps and letting the algorithm work more softly,
  allocating and freeing memory in better-fitting chunks. Otherwise
  there is quite a big gap between allocating 128K and 5x128K of RAM at
  once.
  Notes: svn path=/head/; revision=258337
* Implement soft pressure on UMA cache bucket sizes.
  Author: Alexander Motin | Date: 2013-11-19 | 1 file changed, -0/+9 lines
  Every time the system detects a low memory condition, decrease the
  bucket size of each zone by one item. As a result, higher memory
  pressure pushes toward smaller bucket sizes, hence smaller per-CPU
  caches and more efficient memory use. Before this change there was no
  force opposing bucket growth driven by practically inevitable zone
  lock contention, and after some run time per-CPU caches could consume
  enough RAM to kill the system.
  Notes: svn path=/head/; revision=258336
* Fix a bug introduced in the rewrite of keg_free_slab in r251894.
  Author: Kirk McKusick | Date: 2013-08-31 | 1 file changed, -1/+1 lines
  The consequence of the bug is that fini calls are not done when a
  slab is freed by a call-back from the page daemon. It went unnoticed
  for two months because fini is little used.
  I spotted the bug while reading the code to learn how it works so I
  could write it up for the next edition of the Design and
  Implementation of FreeBSD book.
  No MFC needed as this code exists only in HEAD.
  Reviewed by: kib, jeff
  Tested by: pho
  Notes: svn path=/head/; revision=255097
* Different consumers of struct vm_page abuse the pageq member to keep
  additional information when the page is guaranteed not to belong to a
  paging queue.
  Author: Konstantin Belousov | Date: 2013-08-10 | 1 file changed, -13/+1 lines
  Usually this results in a lot of type casts, which make reasoning
  about the code's correctness harder. Sometimes m->object is used
  instead of pageq, which could cause real and confusing bugs if a
  non-NULL m->object is leaked. See r141955 and r253140 for examples.
  Change the pageq member into a union containing explicitly-typed
  members. Use them instead of type-punning or abusing m->object in the
  x86 pmaps, uma and vm_page_alloc_contig().
  Requested and reviewed by: alc
  Sponsored by: The FreeBSD Foundation
  Notes: svn path=/head/; revision=254182
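  The shape of the change, as an illustration only (the member names
  here are invented; see vm_page.h for the real union):

      union {
              TAILQ_ENTRY(vm_page) q;    /* while on a paging queue */
              SLIST_ENTRY(vm_page) link; /* e.g. UMA free-page list */
              void *priv;                /* other unqueued uses */
      } pageq_u;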
* Replace kernel virtual address space allocation with vmem.
  Author: Jeff Roberson | Date: 2013-08-07 | 1 file changed, -6/+6 lines
  This provides transparent layering and better fragmentation.
  - Normalize functions that allocate memory to use kmem_*.
  - Those that allocate address space are named kva_*.
  - Those that operate on maps are named kmap_*.
  - Implement recursive allocation handling for kmem_arena in vmem.
  Reviewed by: alc
  Tested by: pho
  Sponsored by: EMC / Isilon Storage Division
  Notes: svn path=/head/; revision=254025
* Revert r249590; if mp_ncpus isn't initialized yet, use MAXCPU.
  Author: Gleb Smirnoff | Date: 2013-07-23 | 1 file changed, -2/+3 lines
  This allows us to init the counter zone at an early stage of boot, as
  sketched below.
  Reviewed by: kib
  Tested by: Lytochkin Boris <lytboris gmail.com>
  Notes: svn path=/head/; revision=253565
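  The fallback amounts to roughly this (a sketch):

      int ncpus;

      /* early boot: mp_ncpus may still be 0 */
      ncpus = (mp_ncpus != 0) ? mp_ncpus : MAXCPU;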
* Remove a spurious keg lock acquisition.
  Author: Davide Italiano | Date: 2013-06-28 | 1 file changed, -1/+1 lines
  Notes: svn path=/head/; revision=252358
* Resolve bucket recursion issues by passing a cookie with zone flags
  through bucket_alloc() to uma_zalloc_arg() and uma_zfree_arg().
  Author: Jeff Roberson | Date: 2013-06-26 | 1 file changed, -41/+76 lines
  - Make some smaller buckets for large zones to further reduce memory
    waste.
  - Implement uma_zone_reserve(). This holds aside a number of items
    only for callers who specify M_USE_RESERVE. Buckets will never be
    filled from reserve allocations. (See the usage sketch below.)
  Sponsored by: EMC / Isilon Storage Division
  Notes: svn path=/head/; revision=252226
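  A minimal usage sketch for the reserve API (zone name and count
  invented):

      /* set aside 32 items that only M_USE_RESERVE callers may take */
      uma_zone_reserve(request_zone, 32);

      /* a critical-path allocation that may dip into the reserve */
      req = uma_zalloc(request_zone, M_NOWAIT | M_USE_RESERVE);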
* Add a per-zone lock for zones without kegs.
  Author: Jeff Roberson | Date: 2013-06-20 | 1 file changed, -96/+76 lines
  - Be more explicit about zone vs. keg locking. This functionally
    changes almost nothing.
  - Add a size parameter to uma_zcache_create() so we can size the
    buckets.
  - Pass the zone to bucket_alloc() so it can modify allocation flags
    as appropriate.
  - Fix a bug in zone_alloc_bucket() where I missed an address-of
    operator in a failure case. (Found by pho)
  Sponsored by: EMC / Isilon Storage Division
  Notes: svn path=/head/; revision=252040
* Persist the caller's flags in the bucket allocation flags so we don't
  lose an M_NOVM when we recurse into a bucket allocation.
  Author: Jeff Roberson | Date: 2013-06-19 | 1 file changed, -1/+1 lines
  Sponsored by: EMC / Isilon Storage Division
  Notes: svn path=/head/; revision=251983
* Refine UMA bucket allocation to reduce space consumption and improve
  performance.
  Author: Jeff Roberson | Date: 2013-06-18 | 1 file changed, -302/+261 lines
  - Always free to the alloc bucket if there is space. This gives LIFO
    allocation order to improve hot-cache performance. This also allows
    for zones with a single bucket per-cpu rather than a pair if the
    entire working set fits in one bucket.
  - Enable per-cpu caches of buckets. To prevent recursive bucket
    allocation one bucket zone still has per-cpu caches disabled.
  - Pick the initial bucket size based on a table-driven maximum size
    per bucket rather than the number of items per page. This gives
    more sane initial sizes.
  - Only grow the bucket size when we face contention on the zone lock;
    this causes bucket sizes to grow more slowly.
  - Adjust the number of items per bucket to account for the header
    space. This packs the buckets more efficiently per page while
    making them not quite powers of two.
  - Eliminate the per-zone free bucket list. Always return buckets back
    to the bucket zone. This ensures that as zones grow into larger
    bucket sizes they eventually discard the smaller sizes. It persists
    fewer buckets in the system. The locking is slightly trickier.
  - Only switch buckets in zalloc, not zfree; this eliminates
    pathological cases where we ping-pong between two buckets.
  - Ensure that the thread that fills a new bucket gets to allocate
    from it, to give a better upper bound on allocation time.
  Sponsored by: EMC / Isilon Storage Division
  Notes: svn path=/head/; revision=251894
* Add a new UMA API: uma_zcache_create(). This makes a zone without any
  backing memory that is only a container for per-cpu caches of
  arbitrary pointer items. These zones have no kegs. (See the sketch
  below.)
  Author: Jeff Roberson | Date: 2013-06-17 | 1 file changed, -212/+256 lines
  - Convert the regular keg-based allocator to use the new
    import/release functions.
  - Move some stats to atomics since they would require excessive zone
    locking/unlocking with the new import/release paradigm. Make
    zone_free_item simpler now that callers can manage more stats.
  - Check for these cache-only zones in the public APIs and debugging
    code by checking zone_first_keg() against NULL.
  Sponsored by: EMC / Isilon Storage Division
  Notes: svn path=/head/; revision=251826
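  A hedged sketch of a cache-only zone (the foo_* backend functions are
  invented; the import callback fills an array of item pointers and
  returns how many it produced, release hands them back; the size
  parameter shown here was added slightly later, in r252040):

      static int
      foo_import(void *arg, void **store, int count, int flags)
      {
              int i;

              for (i = 0; i < count; i++)
                      if ((store[i] = foo_backend_get(flags)) == NULL)
                              break;
              return (i);
      }

      static void
      foo_release(void *arg, void **store, int count)
      {
              while (count-- > 0)
                      foo_backend_put(store[count]);
      }

      zone = uma_zcache_create("foocache", sizeof(struct foo),
          NULL, NULL, NULL, NULL, foo_import, foo_release, NULL, 0);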
* Convert the slab free-item list from a linked array of indices to a
  bitmap using sys/bitset.
  Author: Jeff Roberson | Date: 2013-06-13 | 1 file changed, -192/+98 lines
  - This is much simpler, has lower space overhead and is cheaper in
    most cases.
  - Use a second bitmap for invariants asserts and improve the quality
    of the asserts as well as the number of erroneous conditions that
    we will catch.
  - Drastically simplify the sizing code. Special-case refcnt zones
    since they will be going away.
  - Update stale comments.
  Sponsored by: EMC / Isilon Storage Division
  Notes: svn path=/head/; revision=251709
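  The sys/bitset primitives behind this, in a minimal free-map sketch
  (type and variable names invented; UMA wraps the same macros in its
  own types):

      #include <sys/bitset.h>

      BITSET_DEFINE(slabbits, 256);       /* at most 256 items per slab */
      struct slabbits free_map;

      BIT_FILL(256, &free_map);           /* every item starts free */
      BIT_CLR(256, idx, &free_map);       /* item 'idx' now allocated */
      if (BIT_ISSET(256, idx, &free_map))
              /* item 'idx' is still free */;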
* Panic if a UMA_ZONE_PCPU zone is created at an early stage of boot,
  when mp_ncpus isn't yet initialized.
  Author: Gleb Smirnoff | Date: 2013-04-22 | 1 file changed, -0/+1 lines
  Otherwise we will panic at the first allocation later.
  Sponsored by: Nginx, Inc.
  Notes: svn path=/head/; revision=249763
* Convert UMA code to C99 uintXX_t types.
  Author: Gleb Smirnoff | Date: 2013-04-09 | 1 file changed, -30/+30 lines
  Notes: svn path=/head/; revision=249313
* Fix KASSERTs: maximum number of items per slab is 256.
  Author: Gleb Smirnoff | Date: 2013-04-09 | 1 file changed, -3/+3 lines
  Notes: svn path=/head/; revision=249305
* Merge from projects/counters: UMA_ZONE_PCPU zones.
  Author: Gleb Smirnoff | Date: 2013-04-08 | 1 file changed, -32/+56 lines
  These zones have slab size == sizeof(struct pcpu), but request from
  the VM enough pages to fit (uk_slabsize * mp_ncpus). An item
  allocated from such a zone has a separate twin for each CPU in the
  system, and these twins lie at a distance of sizeof(struct pcpu) from
  each other. This magic distance allows some optimizations later.
  To address a CPU's private copy of an item, simple arithmetic is
  used:

      item = (type *)((char *)base + sizeof(struct pcpu) * curcpu);

  This arithmetic is available as the zpcpu_get() macro in pcpu.h; see
  the usage sketch below.
  To introduce non-page-size slabs, a new field, uk_slabsize, has been
  added to uma_keg. This shifted some frequently used fields of uma_keg
  to the fourth cache line on amd64. To mitigate this pessimization,
  the uma_keg fields were rearranged a bit and the least frequently
  used ones, uk_name and uk_link, moved down to the fourth cache line.
  All other fields that are dereferenced frequently fit into the first
  three cache lines.
  Sponsored by: Nginx, Inc.
  Notes: svn path=/head/; revision=249264
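  A hedged usage sketch (zone name invented; the critical section keeps
  curcpu stable while we touch our copy):

      /* one 64-bit counter per CPU */
      uma_zone_t pcpu64_zone = uma_zcreate("pcpu-64", sizeof(uint64_t),
          NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_PCPU);

      uint64_t *base, *mine;

      base = uma_zalloc(pcpu64_zone, M_WAITOK | M_ZERO);

      critical_enter();
      mine = zpcpu_get(base);   /* this CPU's private twin */
      (*mine)++;
      critical_exit();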
* Switch the vm_object mutex to be a rwlock.
  Author: Attilio Rao | Date: 2013-03-09 | 1 file changed, -0/+1 lines
  This will enable future optimizations where the vm_object lock is
  held in read mode most of the time the page cache resident pool of
  pages is accessed for reading purposes.
  The change is mostly mechanical, but a few notes are worth reporting:
  * The KPI changes as follows (see the sketch below):
    - VM_OBJECT_LOCK() -> VM_OBJECT_WLOCK()
    - VM_OBJECT_TRYLOCK() -> VM_OBJECT_TRYWLOCK()
    - VM_OBJECT_UNLOCK() -> VM_OBJECT_WUNLOCK()
    - VM_OBJECT_LOCK_ASSERT(MA_OWNED) -> VM_OBJECT_ASSERT_WLOCKED()
      (in order to avoid visibility of implementation details)
    - Read-mode operations are added: VM_OBJECT_RLOCK(),
      VM_OBJECT_TRYRLOCK(), VM_OBJECT_RUNLOCK(),
      VM_OBJECT_ASSERT_RLOCKED(), VM_OBJECT_ASSERT_LOCKED()
  * Avoiding namespace pollution in vm/vm_pager.h (which would force
    consumers to include sys/mutex.h directly to cater to its inline
    functions using VM_OBJECT_LOCK()) means all vm/vm_pager.h consumers
    must now also include sys/rwlock.h.
  * zfs requires a quite convoluted fix to include FreeBSD rwlocks in
    the compat layer, because the name clash between the FreeBSD and
    Solaris versions must be avoided. For this purpose zfs redefines
    the vm_object locking functions directly, isolating the FreeBSD
    components in specific compat stubs.
  The KPI is heavily broken by this commit. Third-party ports must be
  updated accordingly (I can think off-hand of VirtualBox, for
  example).
  Sponsored by: EMC / Isilon storage division
  Reviewed by: jeff
  Reviewed by: pjd (ZFS specific review)
  Discussed with: alc
  Tested by: pho
  Notes: svn path=/head/; revision=248084
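  The read/write split in practice, as a short sketch (the bodies are
  elided; which operations became legal under the read lock evolved in
  later commits):

      /* reader: inspect the object and its resident pages */
      VM_OBJECT_RLOCK(object);
      /* ... read-only traversal ... */
      VM_OBJECT_RUNLOCK(object);

      /* writer: anything that mutates the object */
      VM_OBJECT_WLOCK(object);
      /* ... insert/remove pages, change flags ... */
      VM_OBJECT_WUNLOCK(object);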
* Merge from the vmc-playground branch: replace the sub-optimal
  uma_zone_set_obj() primitive with the more modern
  uma_zone_reserve_kva().
  Author: Attilio Rao | Date: 2013-02-26 | 1 file changed, -48/+56 lines
  The new primitive reserves beforehand the KVA space necessary to
  cater for the zone's allocations, and allocates pages with
  ALLOC_NOOBJ. More specifically:
  - uma_zone_reserve_kva() does not need an object to cater to the
    backend allocator.
  - uma_zone_reserve_kva() can cater to M_WAITOK requests, in order to
    serve zones which need to do uma_prealloc() too.
  - When possible, uma_zone_reserve_kva() uses the direct map via
    uma_small_alloc() rather than relying on the KVA / offset
    combination.
  The removal of the object attribute allows two further changes:
  1) _vm_object_allocate() becomes static within vm_object.c.
  2) VM_OBJECT_LOCK_INIT() is removed. This function is replaced by
     direct calls to mtx_init(), as there is no need to export it
     anymore and the calls are no longer homogeneous: there are now
     small differences between the arguments passed to mtx_init().
  Sponsored by: EMC / Isilon storage division
  Reviewed by: alc (who also offered almost all the comments)
  Tested by: pho, jhb, davide
  Notes: svn path=/head/; revision=247360
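  A usage sketch for the new primitive (zone and struct names invented;
  the convention of a zero return meaning failure is assumed from
  contemporary callers):

      zone = uma_zcreate("foonode", sizeof(struct foonode),
          NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, UMA_ZONE_VM);
      if (uma_zone_reserve_kva(zone, maxnodes) == 0)
              panic("foonode: cannot reserve KVA");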
* Fix typo in debug printf.
  Author: Gleb Smirnoff | Date: 2013-01-29 | 1 file changed, -1/+1 lines
  Notes: svn path=/head/; revision=246087
* Implement the uma_zone_set_warning(9) function, which sets a warning
  to be printed when the given zone becomes full and cannot allocate an
  item.
  Author: Pawel Jakub Dawidek | Date: 2012-12-07 | 1 file changed, -1/+33 lines
  The warning will not be printed more often than every five minutes.
  All UMA warnings can be globally turned off by setting the
  sysctl/tunable vm.zone_warnings to 0.
  Discussed on: arch
  Obtained from: WHEEL Systems
  MFC after: 2 weeks
  Notes: svn path=/head/; revision=243998
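  Usage is a single call at zone-creation time; a sketch (the zone,
  limit and message are invented):

      req_zone = uma_zcreate("request", sizeof(struct request),
          NULL, NULL, NULL, NULL, UMA_ALIGN_PTR, 0);
      uma_zone_set_max(req_zone, 1024);
      uma_zone_set_warning(req_zone, "out of request structures");

      /* when the zone is full, allocations fail and the kernel logs
         the warning, rate-limited to once every five minutes */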
* Const-ify the zone name argument to uma_zcreate(9).
  Author: Matthew D Fleming | Date: 2012-10-26 | 1 file changed, -2/+2 lines
  MFC after: 3 days
  Notes: svn path=/head/; revision=242152
* Print flags as hex instead of an integer.
  Author: Eitan Adler | Date: 2012-10-22 | 1 file changed, -3/+3 lines
  PR: kern/168210
  Submitted by: linimon
  Reviewed by: alc
  Approved by: cperciva
  MFC after: 3 days
  Notes: svn path=/head/; revision=241825
* If the caller specifies UMA_ZONE_OFFPAGE explicitly, then do not
  waste memory in an allocation for a slab.
  Author: Gleb Smirnoff | Date: 2012-09-18 | 1 file changed, -1/+3 lines
  Reviewed by: jeff
  Notes: svn path=/head/; revision=240676
* Fix function name in keg_cachespread_init() assert.
  Author: Gleb Smirnoff | Date: 2012-08-26 | 1 file changed, -1/+1 lines
  Notes: svn path=/head/; revision=239710
* Add a missing sleep stat increase.
  Author: Eitan Adler | Date: 2012-07-07 | 1 file changed, -0/+1 lines
  PR: kern/168211
  Submitted by: linimon
  Reviewed by: alc
  Approved by: cperciva
  MFC after: 3 days
  Notes: svn path=/head/; revision=238206
* Honor db_pager_quit in 'show uma' and 'show malloc'.
  Author: John Baldwin | Date: 2012-07-02 | 1 file changed, -0/+2 lines
  MFC after: 1 month
  Notes: svn path=/head/; revision=238000
* Tweak the condition for disabling allocation from per-CPU buckets in
  low memory situations.
  Author: Maksim Yevmenkin | Date: 2012-05-23 | 1 file changed, -4/+1 lines
  I've observed a situation where per-CPU allocations were disabled
  while there were enough free cached pages. Basically, cnt.v_free_count
  was sitting stable at a value lower than cnt.v_free_min, and that
  caused a massive performance drop.
  Reviewed by: alc
  MFC after: 1 week
  Notes: svn path=/head/; revision=235854
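  The idea of the fix, roughly (a sketch, not the literal diff): credit
  cached pages toward the available pool instead of looking at free
  pages alone, e.g. via the stock predicate:

      /* disable per-CPU buckets only when free + cached pages
         together fall below the minimum */
      bucketdisable = vm_page_count_min();

  where vm_page_count_min() compares cnt.v_free_min against
  cnt.v_free_count + cnt.v_cache_count.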