path: root/sys/amd64/sgx
author    Mark Johnston <markj@FreeBSD.org>  2019-09-09 21:32:42 +0000
committer Mark Johnston <markj@FreeBSD.org>  2019-09-09 21:32:42 +0000
commit    fee2a2fa39834d8d5eaa981298fce9d2ed31546d (patch)
tree      290b84257a055cb0fbd4eb498ca16690ad749aa3 /sys/amd64/sgx
parent    58a11be1cf32a2ad832d7167bd41819cb105851e (diff)
download  src-fee2a2fa39834d8d5eaa981298fce9d2ed31546d.tar.gz
          src-fee2a2fa39834d8d5eaa981298fce9d2ed31546d.zip
Change synchronization rules for vm_page reference counting.
There are several mechanisms by which a vm_page reference is held, preventing the page from being freed back to the page allocator. In particular, holding the page's object lock is sufficient to prevent the page from being freed; holding the busy lock or a wiring is sufficient as well. These references are protected by the page lock, which must therefore be acquired for many per-page operations. This results in false sharing since the page locks are external to the vm_page structures themselves and each lock protects multiple structures.

Transition to using an atomically updated per-page reference counter. The object's reference is counted using a flag bit in the counter. A second flag bit is used to atomically block new references via pmap_extract_and_hold() while removing managed mappings of a page. Thus, the reference count of a page is guaranteed not to increase if the page is unbusied, unmapped, and the object's write lock is held. As a consequence of this, the page lock no longer protects a page's identity; operations which move pages between objects are now synchronized solely by the objects' locks.

The vm_page_wire() and vm_page_unwire() KPIs are changed. The former requires that either the object lock or the busy lock is held. The latter no longer has a return value and may free the page if it releases the last reference to that page. vm_page_unwire_noq() behaves the same as before; the caller is responsible for checking its return value and freeing or enqueuing the page as appropriate. vm_page_wire_mapped() is introduced for use in pmap_extract_and_hold(). It fails if the page is concurrently being unmapped, typically triggering a fallback to the fault handler. vm_page_wire() no longer requires the page lock and vm_page_unwire() now internally acquires the page lock when releasing the last wiring of a page (since the page lock still protects a page's queue state). In particular, synchronization details are no longer leaked into the caller.
The change excises the page lock from several frequently executed code paths. In particular, vm_object_terminate() no longer bounces between page locks as it releases an object's pages, and direct I/O and sendfile(SF_NOCACHE) completions no longer require the page lock. In these latter cases we now get linear scalability in the common scenario where different threads are operating on different files.

__FreeBSD_version is bumped. The DRM ports have been updated to accommodate the KPI changes.

Reviewed by:	jeff (earlier version)
Tested by:	gallatin (earlier version), pho
Sponsored by:	Netflix
Differential Revision:	https://reviews.freebsd.org/D20486
Notes
    svn path=/head/; revision=352110
Diffstat (limited to 'sys/amd64/sgx')
-rw-r--r--  sys/amd64/sgx/sgx.c  2
1 file changed, 0 insertions(+), 2 deletions(-)
diff --git a/sys/amd64/sgx/sgx.c b/sys/amd64/sgx/sgx.c
index 3d45b60de3ef..ea18c9674234 100644
--- a/sys/amd64/sgx/sgx.c
+++ b/sys/amd64/sgx/sgx.c
@@ -357,9 +357,7 @@ sgx_page_remove(struct sgx_softc *sc, vm_page_t p)
 	vm_paddr_t pa;
 	uint64_t offs;
 
-	vm_page_lock(p);
 	(void)vm_page_remove(p);
-	vm_page_unlock(p);
 
 	dprintf("%s: p->pidx %ld\n", __func__, p->pindex);