path: root/sys/kern/uipc_shm.c
Commit history (newest first)
* Constify vm_pager-related virtual tables.  (Konstantin Belousov, 2021-05-22; 1 file, -1/+1)
    (cherry picked from commit d474440ab33c683b0e3f55e8e854f055615db6ec)
* Split out cwd/root/jail, cmask state from filedesc table  (Conrad Meyer, 2020-11-17; 1 file, -3/+3)
    No functional change intended.  Tracking these structures separately for
    each proc enables future work to correctly emulate clone(2) in linux(4).
    __FreeBSD_version is bumped (to 1300130) for consumption by, e.g., lsof.
    Reviewed by:    kib
    Discussed with: markj, mjg
    Differential Revision:  https://reviews.freebsd.org/D27037
    Notes: svn path=/head/; revision=367777
* Add a vmparam.h constant indicating pmap support for large pages.  (Mark Johnston, 2020-09-23; 1 file, -3/+1)
    Enable SHM_LARGEPAGE support on arm64.
    Reviewed by:    alc, kib
    Sponsored by:   Juniper Networks, Inc., Klara, Inc.
    Differential Revision:  https://reviews.freebsd.org/D26467
    Notes: svn path=/head/; revision=366090
* vm_ooffset_t is now unsigned  (Eric van Gyzen, 2020-09-18; 1 file, -1/+1)
    vm_ooffset_t is now unsigned.  Remove some tests for negative values,
    or make other adjustments accordingly.
    Reported by:    Coverity
    Reviewed by:    kib, markj
    Sponsored by:   Dell EMC Isilon
    Differential Revision:  https://reviews.freebsd.org/D26214
    Notes: svn path=/head/; revision=365886
* Fix interaction between largepages and seals/writes.  (Konstantin Belousov, 2020-09-10; 1 file, -9/+16)
    On write with SHM_GROW_ON_WRITE, use a proper truncate.  Do not allow a
    largepage shm to grow if F_SEAL_GROW is set.  Note that shrinks are not
    supported at all due to unmanaged mappings.
    The call to vm_pager_update_writecount() is only valid for swap objects;
    skip it for unmanaged largepages.
    Largepages cannot support write sealing.  Do not writecnt largepage
    mappings.
    Reported by:    kevans
    Reviewed by:    kevans, markj
    Sponsored by:   The FreeBSD Foundation
    MFC after:      1 week
    Differential revision: https://reviews.freebsd.org/D26394
    Notes: svn path=/head/; revision=365613
* Support for userspace non-transparent superpages (largepages).  (Konstantin Belousov, 2020-09-09; 1 file, -18/+405)
    Created with shm_open2(SHM_LARGEPAGE) and then configured with the
    FIOSSHMLPGCNF ioctl, largepage POSIX shared memory objects guarantee
    that all userspace mappings of them are served by non-managed superpage
    mappings.
    Only amd64 is supported for now; both 2M and 1G superpages can be
    requested, the latter requiring a CPU feature.  A sketch of the userspace
    interface follows below.
    Reviewed by:    markj
    Tested by:      pho
    Sponsored by:   The FreeBSD Foundation
    MFC after:      1 week
    Differential revision: https://reviews.freebsd.org/D24652
    Notes: svn path=/head/; revision=365522
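    A minimal userspace sketch of the interface described above, not taken
    from the commit itself: the shm_largepage_conf field names (psind,
    alloc_policy), the SHM_LARGEPAGE_ALLOC_DEFAULT policy, and the userland
    visibility of the shm_open2() prototype are assumptions based on later
    FreeBSD 13 headers.

        #include <sys/types.h>
        #include <sys/ioctl.h>
        #include <sys/mman.h>

        #include <err.h>
        #include <fcntl.h>
        #include <string.h>
        #include <unistd.h>

        int
        main(void)
        {
            struct shm_largepage_conf conf;
            size_t sizes[8];
            void *p;
            int fd, i, n;

            /* Find the index of the 2M superpage size. */
            n = getpagesizes(sizes, 8);
            for (i = 0; i < n; i++)
                if (sizes[i] == 2 * 1024 * 1024)
                    break;
            if (n <= 0 || i == n)
                errx(1, "no 2M superpage size reported");

            fd = shm_open2("/lp_demo", O_CREAT | O_RDWR, 0600, SHM_LARGEPAGE, NULL);
            if (fd == -1)
                err(1, "shm_open2");

            /* Configure the page size index and allocation policy. */
            memset(&conf, 0, sizeof(conf));
            conf.psind = i;
            conf.alloc_policy = SHM_LARGEPAGE_ALLOC_DEFAULT;
            if (ioctl(fd, FIOSSHMLPGCNF, &conf) == -1)
                err(1, "FIOSSHMLPGCNF");

            /* The object size must be a multiple of the chosen superpage size. */
            if (ftruncate(fd, (off_t)sizes[i]) == -1)
                err(1, "ftruncate");

            p = mmap(NULL, sizes[i], PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                err(1, "mmap");

            shm_unlink("/lp_demo");
            return (0);
        }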
* uipc_shm.c: Move comment where it belongs.  (Konstantin Belousov, 2020-09-09; 1 file, -5/+5)
    Reviewed by:    markj
    Tested by:      pho
    Sponsored by:   The FreeBSD Foundation
    MFC after:      1 week
    Differential revision: https://reviews.freebsd.org/D24652
    Notes: svn path=/head/; revision=365510
* kern: clean up empty lines in .c and .h files  (Mateusz Guzik, 2020-09-01; 1 file, -2/+2)
    Notes: svn path=/head/; revision=365222
* posixshm: fix setting of shm_flags  (Kyle Evans, 2020-08-31; 1 file, -1/+4)
    Noted in D24652, we currently set shmfd->shm_flags on every
    shm_open()/shm_open2().  This wasn't properly thought out; one shouldn't
    be able to specify incompatible flags on subsequent opens of non-anon shm.
    Move setting of shm_flags explicitly to the two places shmfd are created,
    as we do with seals, and validate when we're opening a pre-existing
    mapping that we've either passed no flags or we've passed the exact same
    flags as the first time.
    Reviewed by:    kib, markj
    Differential Revision:  https://reviews.freebsd.org/D26242
    Notes: svn path=/head/; revision=364990
* vfs: remove the obsolete privused argument from vaccess  (Mateusz Guzik, 2020-08-05; 1 file, -3/+3)
    This brings the argument count down to 6, which is passable without the
    stack on amd64.
    Notes: svn path=/head/; revision=363893
* shm_open2: Implement SHM_GROW_ON_WRITE  (Kyle Evans, 2020-07-10; 1 file, -5/+32)
    Lack of SHM_GROW_ON_WRITE is actively breaking Python's memfd_create
    tests, so go ahead and implement it.  A future change will make
    memfd_create always set SHM_GROW_ON_WRITE, to match Linux behavior and
    unbreak Python's tests on -CURRENT.
    Reviewed by:    kib
    Differential Revision:  https://reviews.freebsd.org/D25502
    Notes: svn path=/head/; revision=363065
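    A small sketch of the user-visible semantics, assuming memfd_create(3)
    requests SHM_GROW_ON_WRITE as the follow-up change described above
    intends: a write() past the current end grows the object instead of
    failing, matching Linux memfd behaviour.

        #include <sys/mman.h>

        #include <err.h>
        #include <unistd.h>

        int
        main(void)
        {
            char buf[] = "hello";
            int fd;

            fd = memfd_create("grow-demo", MFD_CLOEXEC);
            if (fd == -1)
                err(1, "memfd_create");
            /* No ftruncate(): the object starts at size 0 and grows on write. */
            if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf))
                err(1, "write past current size");
            return (0);
        }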
* Call swap_pager_freespace() from vm_object_page_remove().  (Mark Johnston, 2020-06-25; 1 file, -5/+0)
    All vm_object_page_remove() callers, except
    linux_invalidate_mapping_pages() in the LinuxKPI, free swap space when
    removing a range of pages from an object.  The LinuxKPI case appears to
    be an unintentional omission that could result in leaked swap blocks, so
    unconditionally free swap space in vm_object_page_remove() to protect
    against similar bugs in the future.
    Reviewed by:    alc, kib
    Tested by:      pho
    Sponsored by:   The FreeBSD Foundation
    Differential Revision:  https://reviews.freebsd.org/D25329
    Notes: svn path=/head/; revision=362613
* posixshm: fix counting of writable mappings  (Kyle Evans, 2020-04-14; 1 file, -7/+9)
    Similar to mmap'ing vnodes, posixshm should count any mapping where
    maxprot contains VM_PROT_WRITE (i.e. fd opened r/w with no write-seal
    applied) as writable, and thus as blocking any write-seal.
    The memfd tests have been amended to reflect the fixes here, which
    notably include:
    1. Fix for an error return bug; EPERM is not a documented failure mode
       for mmap.
    2. Fix rejection of write-seal with active mappings that can be upgraded
       via mprotect(2).
    Reported by:    markj
    Discussed with: markj, kib
    Notes: svn path=/head/; revision=359918
* Relax restrictions on private mappings of POSIX shm objects.  (Mark Johnston, 2020-04-13; 1 file, -14/+19)
    When creating a private mapping of a POSIX shared memory object,
    VM_PROT_WRITE should always be included in maxprot regardless of
    permissions on the underlying FD.  Otherwise it is possible to open a shm
    object read-only, map it with MAP_PRIVATE and PROT_WRITE, and violate the
    invariant in vm_map_insert() that (prot & maxprot) == prot.
    Reported by:    syzkaller
    Reviewed by:    kevans, kib
    MFC after:      1 week
    Sponsored by:   The FreeBSD Foundation
    Differential Revision:  https://reviews.freebsd.org/D24398
    Notes: svn path=/head/; revision=359892
* Fix the malloc type used in sys_shm_unlink() after r354808.  (Mark Johnston, 2020-03-03; 1 file, -1/+1)
    PR:             244563
    Reported by:    swills
    Notes: svn path=/head/; revision=358563
* Use unlocked grab for uipc_shm/tmpfs.  (Jeff Roberson, 2020-02-28; 1 file, -5/+9)
    Reviewed by:    markj
    Differential Revision:  https://reviews.freebsd.org/D23865
    Notes: svn path=/head/; revision=358446
* Don't hold the object lock while calling getpages.  (Jeff Roberson, 2020-01-19; 1 file, -0/+4)
    The vnode pager does not want the object lock held.  Moving this out
    allows further object lock scope reduction in callers.  While here add
    some missing paging-in-progress calls and an assert.  The object handle
    is now protected explicitly with pip.
    Reviewed by:    kib, markj
    Differential Revision:  https://reviews.freebsd.org/D23033
    Notes: svn path=/head/; revision=356902
* shmfd: posix_fallocate(2): only take rangelock for section we need  (Kyle Evans, 2020-01-09; 1 file, -1/+11)
    Other mechanisms that resize the shmfd grab a write lock from 0 to
    OFF_MAX for safety, so we still get proper synchronization of
    shmfd->shm_size in effect.  There's no need to block readers/writers of
    earlier segments when we're just reserving more space, so narrow the
    scope -- it would likely be safe to narrow it completely to just the
    section of the range that extends beyond our current size, but this
    likely isn't worth it since the size isn't stable until the writelock is
    granted the first time.
    Suggested by:   cem (passing comment)
    Notes: svn path=/head/; revision=356537
* posixshm: implement posix_fallocate(2)  (Kyle Evans, 2020-01-08; 1 file, -0/+28)
    Linux expects to be able to use posix_fallocate(2) on a memfd.  Other
    places would use this with shm_open(2) to act as a smarter ftruncate(2).
    A test has been added to go along with this.
    Reviewed by:    kib (earlier version)
    Differential Revision:  https://reviews.freebsd.org/D23042
    Notes: svn path=/head/; revision=356512
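    A minimal sketch of the "smarter ftruncate" usage mentioned above:
    reserve backing store for an anonymous POSIX shm object with
    posix_fallocate(2).  Note that posix_fallocate() returns an errno value
    rather than setting errno.

        #include <sys/mman.h>

        #include <err.h>
        #include <fcntl.h>
        #include <unistd.h>

        int
        main(void)
        {
            int error, fd;

            fd = shm_open(SHM_ANON, O_RDWR, 0600);
            if (fd == -1)
                err(1, "shm_open");
            /* Reserve 1 MB of backing store up front. */
            error = posix_fallocate(fd, 0, 1024 * 1024);
            if (error != 0)
                errc(1, error, "posix_fallocate");
            return (0);
        }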
* shm: correct KPI mistake introduced around memfd_create  (Kyle Evans, 2020-01-05; 1 file, -16/+16)
    When file sealing and shm_open2 were introduced, we should have grown a
    new kern_shm_open2 helper that did the brunt of the work with the new
    interface while kern_shm_open remained the same.  Instead, more
    complexity was introduced to kern_shm_open to handle the additional
    features, consumers had to keep changing in somewhat awkward ways, and a
    kern_shm_open2 was added to wrap kern_shm_open.
    Backpedal on this and correct the situation: kern_shm_open returns to
    the interface it had prior to file sealing being introduced, and neither
    function needs an initial_seals argument anymore as it's handled in
    kern_shm_open2 based on the shmflags.
    Notes: svn path=/head/; revision=356372
* shmfd/mmap: restrict maxprot with MAP_SHARED + F_SEAL_WRITE  (Kyle Evans, 2020-01-05; 1 file, -1/+7)
    If a write seal is set on a shared mapping, we must exclude
    VM_PROT_WRITE as the fd is effectively read-only.
    This was discovered by running devel/linux-ltp, which mmap's with
    acceptable protections specified, then attempts to raise to
    PROT_READ|PROT_WRITE with mprotect(2), which we allowed.
    Reviewed by:    kib
    Differential Revision:  https://reviews.freebsd.org/D22978
    Notes: svn path=/head/; revision=356371
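    A sketch of the behaviour the commit fixes: after F_SEAL_WRITE, a
    MAP_SHARED mapping gets a read-only maxprot, so a later mprotect(2)
    upgrade to PROT_WRITE is rejected.  The exact errno is not stated in the
    commit; EACCES is the documented mprotect(2) failure for exceeding
    maxprot.

        #include <sys/mman.h>

        #include <err.h>
        #include <fcntl.h>
        #include <unistd.h>

        int
        main(void)
        {
            void *p;
            int fd;

            fd = memfd_create("seal-demo", MFD_ALLOW_SEALING);
            if (fd == -1)
                err(1, "memfd_create");
            if (ftruncate(fd, getpagesize()) == -1)
                err(1, "ftruncate");
            if (fcntl(fd, F_ADD_SEALS, F_SEAL_WRITE) == -1)
                err(1, "F_ADD_SEALS");

            /* Read-only shared mapping of the sealed object is still allowed. */
            p = mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                err(1, "mmap");
            /* Upgrading past the sealed maxprot must fail. */
            if (mprotect(p, getpagesize(), PROT_READ | PROT_WRITE) == -1)
                warn("mprotect upgrade rejected as expected");
            return (0);
        }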
* Remove page locking for queue operations.  (Mark Johnston, 2019-12-28; 1 file, -2/+0)
    With the previous reviews, the page lock is no longer required in order
    to perform queue operations on a page.  It is also no longer needed in
    the page queue scans.  This change effectively eliminates remaining uses
    of the page lock and also the false sharing caused by multiple pages
    sharing a page lock.
    Reviewed by:    jeff
    Tested by:      pho
    Sponsored by:   Netflix, Intel
    Differential Revision:  https://reviews.freebsd.org/D22885
    Notes: svn path=/head/; revision=356157
* Fix a mistake in r355765.  We need to activate the page if it is not yet on a pagequeue.  (Jeff Roberson, 2019-12-15; 1 file, -1/+3)
    Reported by:    pho
    Notes: svn path=/head/; revision=355771
* Add a deferred free mechanism for freeing swap space that does not require an exclusive object lock.  (Jeff Roberson, 2019-12-15; 1 file, -10/+6)
    Previously swap space was freed on a best effort basis when a page that
    had valid swap was dirtied, thus invalidating the swap copy.  This may be
    done inconsistently and requires the object lock which is not always
    convenient.
    Instead, track when swap space is present.  The first dirty is
    responsible for deleting space or setting PGA_SWAP_FREE which will
    trigger background scans to free the swap space.
    Simplify the locking in vm_fault_dirty() now that we can reliably
    identify the first dirty.
    Discussed with: alc, kib, markj
    Differential Revision:  https://reviews.freebsd.org/D22654
    Notes: svn path=/head/; revision=355765
* Simplify anonymous memory handling with an OBJ_ANON flag.  (Jeff Roberson, 2019-11-19; 1 file, -5/+0)
    This eliminates redundant complicated checks and additional locking
    required only for anonymous memory.  Introduce vm_object_allocate_anon()
    to create these objects.  DEFAULT and SWAP objects now have the correct
    settings for non-anonymous consumers, so individual consumers need not
    modify the default flags to create super-pages and avoid
    ONEMAPPING/NOSPLIT.
    Reviewed by:    alc, dougm, kib, markj
    Tested by:      pho
    Differential Revision:  https://reviews.freebsd.org/D22119
    Notes: svn path=/head/; revision=354869
* Jail and capability mode for shm_rename; add audit support for shm_rename  (David Bright, 2019-11-18; 1 file, -65/+72)
    Co-mingling two things here:
    * Addressing some feedback from Konstantin and Kyle re: jail, capability
      mode, and a few other things
    * Adding audit support as promised.  The audit support change includes a
      partial refresh of OpenBSM from upstream, where the change to add
      shm_rename has already been accepted.  Matthew doesn't plan to work on
      refreshing anything else to support audit for those new event types.
    Submitted by:   Matthew Bryan <matthew.bryan@isilon.com>
    Reviewed by:    kib
    Relnotes:       Yes
    Sponsored by:   Dell EMC Isilon
    Differential Revision:  https://reviews.freebsd.org/D22083
    Notes: svn path=/head/; revision=354808
* (4/6) Protect page valid with the busy lock.  (Jeff Roberson, 2019-10-15; 1 file, -2/+2)
    Atomics are used for page busy and valid state when the shared busy is
    held.  The details of the locking protocol and valid and dirty
    synchronization are in the updated vm_page.h comments.
    Reviewed by:    kib, markj
    Tested by:      pho
    Sponsored by:   Netflix, Intel
    Differential Revision:  https://reviews.freebsd.org/D21594
    Notes: svn path=/head/; revision=353539
* (1/6) Replace busy checks with acquires where it is trivial to do so.  (Jeff Roberson, 2019-10-15; 1 file, -4/+3)
    This is the first in a series of patches that promotes the page busy
    field to a first class lock that no longer requires the object lock for
    consistency.
    Reviewed by:    kib, markj
    Tested by:      pho
    Sponsored by:   Netflix, Intel
    Differential Revision:  https://reviews.freebsd.org/D21548
    Notes: svn path=/head/; revision=353535
* shm_open2(2): completely unbreak  (Kyle Evans, 2019-10-02; 1 file, -1/+1)
    kern_shm_open2(), since conception, completely fails to pass the mode
    along to kern_shm_open().  This breaks most uses of it.
    Add tests alongside this that actually check the mode of the returned
    files.
    PR:             240934 [pulseaudio breakage]
    Reported by:    ler, Andrew Gierth [postgres breakage]
    Diagnosed by:   Andrew Gierth (great catch)
    Tested by:      ler, tmunro
    Pointy hat to:  kevans
    Notes: svn path=/head/; revision=352952
* Add an shm_rename syscall  (David Bright, 2019-09-26; 1 file, -2/+155)
    Add an atomic shm rename operation, similar in spirit to a file rename.
    Atomically unlink an shm from a source path and link it to a destination
    path.  If an existing shm is linked at the destination path, unlink it as
    part of the same atomic operation.  The caller needs the same permissions
    as shm_unlink to the shm being renamed, and the same permissions for the
    shm at the destination which is being unlinked, if it exists.  If those
    fail, EACCES is returned, as with the other shm_* syscalls.
    truss support is included; audit support will come later.
    This commit includes only the implementation; the sysent-generated bits
    will come in a follow-on commit.
    Submitted by:   Matthew Bryan <matthew.bryan@isilon.com>
    Reviewed by:    jilles (earlier revision)
    Reviewed by:    brueffer (manpages, earlier revision)
    Relnotes:       yes
    Sponsored by:   Dell EMC Isilon
    Differential Revision:  https://reviews.freebsd.org/D21423
    Notes: svn path=/head/; revision=352747
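    A minimal usage sketch of the syscall described above, as documented in
    shm_rename(2): flags of 0 replace any existing destination object;
    SHM_RENAME_NOREPLACE and SHM_RENAME_EXCHANGE select the other behaviours.

        #include <sys/mman.h>

        #include <err.h>
        #include <fcntl.h>
        #include <unistd.h>

        int
        main(void)
        {
            int fd;

            fd = shm_open("/rename_src", O_CREAT | O_RDWR, 0600);
            if (fd == -1)
                err(1, "shm_open");
            if (ftruncate(fd, 4096) == -1)
                err(1, "ftruncate");
            close(fd);

            /* Atomically move the object; an existing "/rename_dst" is unlinked. */
            if (shm_rename("/rename_src", "/rename_dst", 0) == -1)
                err(1, "shm_rename");
            return (0);
        }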
* sysent: regenerate after r352705  (Kyle Evans, 2019-09-25; 1 file, -1/+3)
    This also implements it, fixes kdump, and removes no longer needed bits
    from lib/libc/sys/shm_open.c for the interim.
    Notes: svn path=/head/; revision=352706
* Add a shm_open2 syscall to support upcoming memfd_create  (Kyle Evans, 2019-09-25; 1 file, -0/+33)
    shm_open2 allows a little more flexibility than the original shm_open.
    shm_open2 doesn't enforce CLOEXEC on its callers, and it has a separate
    shmflag argument that can be expanded later.  Currently the only shmflag
    is to allow file sealing on the returned fd.
    shm_open and memfd_create will both be implemented in libc to use this
    new syscall.
    __FreeBSD_version is bumped to indicate the presence.
    Reviewed by:    kib, markj
    Differential Revision:  https://reviews.freebsd.org/D21393
    Notes: svn path=/head/; revision=352700
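    A simplified sketch of the kind of wrapper the commit anticipates: a
    memfd_create()-like function layered on shm_open2().  This is not the
    real libc implementation (which handles name labelling and the full MFD_*
    flag set), and it assumes shm_open2(), SHM_ALLOW_SEALING, and the MFD_*
    constants are visible to userspace.

        #include <sys/mman.h>

        #include <fcntl.h>

        static int
        memfd_like(unsigned int flags)
        {
            int oflags, shmflags;

            /* CLOEXEC is the caller's choice, not forced by the kernel. */
            oflags = O_RDWR;
            if (flags & MFD_CLOEXEC)
                oflags |= O_CLOEXEC;
            /* The shmflag argument carries the sealing opt-in. */
            shmflags = (flags & MFD_ALLOW_SEALING) ? SHM_ALLOW_SEALING : 0;
            return (shm_open2(SHM_ANON, oflags, 0, shmflags, NULL));
        }

        int
        main(void)
        {
            int fd = memfd_like(MFD_CLOEXEC | MFD_ALLOW_SEALING);

            return (fd == -1);
        }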
* [2/3] Add an initial seal argument to kern_shm_open()  (Kyle Evans, 2019-09-25; 1 file, -5/+58)
    Now that flags may be set on posixshm, add an argument to kern_shm_open()
    for the initial seals.  To maintain past behavior where callers of
    shm_open(2) are guaranteed to not have any seals applied to the fd
    they're given, apply F_SEAL_SEAL for existing callers of kern_shm_open.
    A special flag could be opened later for shm_open(2) to indicate that
    sealing should be allowed.
    We currently restrict initial seals to F_SEAL_SEAL.  We cannot error out
    if F_SEAL_SEAL is re-applied, as this would easily break shm_open() twice
    to a shmfd that already existed.  A note's been added about the
    assumptions we've made here as a hint towards anyone wanting to allow
    other seals to be applied at creation.
    Reviewed by:    kib, markj
    Differential Revision:  https://reviews.freebsd.org/D21392
    Notes: svn path=/head/; revision=352697
* [1/3] Add mostly Linux-compatible file sealing support  (Kyle Evans, 2019-09-25; 1 file, -20/+108)
    File sealing applies protections against certain actions (currently:
    write, growth, shrink) at the inode level.  New fileops are added to
    accommodate seals - EINVAL is returned by fcntl(2) if they are not
    implemented.
    Reviewed by:    markj, kib
    Differential Revision:  https://reviews.freebsd.org/D21391
    Notes: svn path=/head/; revision=352695
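    A sketch of the Linux-compatible interface as exposed through fcntl(2).
    Sealing must be opted into at creation (MFD_ALLOW_SEALING here, or
    SHM_ALLOW_SEALING via shm_open2()); otherwise the fd carries F_SEAL_SEAL
    and F_ADD_SEALS is refused.

        #include <sys/mman.h>

        #include <err.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int
        main(void)
        {
            int fd, seals;

            fd = memfd_create("seals", MFD_ALLOW_SEALING);
            if (fd == -1)
                err(1, "memfd_create");
            if (ftruncate(fd, 4096) == -1)
                err(1, "ftruncate");

            /* Freeze the size: no further growth or shrink is allowed. */
            if (fcntl(fd, F_ADD_SEALS, F_SEAL_GROW | F_SEAL_SHRINK) == -1)
                err(1, "F_ADD_SEALS");

            seals = fcntl(fd, F_GET_SEALS);
            printf("seals: %#x\n", seals);
            return (0);
        }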
* Replace redundant code with a few new vm_page_grab facilities:  (Jeff Roberson, 2019-09-10; 1 file, -18/+7)
    - VM_ALLOC_NOCREAT will grab without creating a page.
    - vm_page_grab_valid() will grab and page in if necessary.
    - vm_page_busy_acquire() automates some busy acquire loops.
    Discussed with: alc, kib, markj
    Tested by:      pho (part of larger branch)
    Sponsored by:   Netflix
    Differential Revision:  https://reviews.freebsd.org/D21546
    Notes: svn path=/head/; revision=352176
* Change synchronization rules for vm_page reference counting.  (Mark Johnston, 2019-09-09; 1 file, -7/+0)
    There are several mechanisms by which a vm_page reference is held,
    preventing the page from being freed back to the page allocator.  In
    particular, holding the page's object lock is sufficient to prevent the
    page from being freed; holding the busy lock or a wiring is sufficient as
    well.  These references are protected by the page lock, which must
    therefore be acquired for many per-page operations.  This results in
    false sharing since the page locks are external to the vm_page structures
    themselves and each lock protects multiple structures.
    Transition to using an atomically updated per-page reference counter.
    The object's reference is counted using a flag bit in the counter.  A
    second flag bit is used to atomically block new references via
    pmap_extract_and_hold() while removing managed mappings of a page.  Thus,
    the reference count of a page is guaranteed not to increase if the page
    is unbusied, unmapped, and the object's write lock is held.  As a
    consequence of this, the page lock no longer protects a page's identity;
    operations which move pages between objects are now synchronized solely
    by the objects' locks.
    The vm_page_wire() and vm_page_unwire() KPIs are changed.  The former
    requires that either the object lock or the busy lock is held.  The
    latter no longer has a return value and may free the page if it releases
    the last reference to that page.  vm_page_unwire_noq() behaves the same
    as before; the caller is responsible for checking its return value and
    freeing or enqueuing the page as appropriate.  vm_page_wire_mapped() is
    introduced for use in pmap_extract_and_hold().  It fails if the page is
    concurrently being unmapped, typically triggering a fallback to the fault
    handler.  vm_page_wire() no longer requires the page lock and
    vm_page_unwire() now internally acquires the page lock when releasing the
    last wiring of a page (since the page lock still protects a page's queue
    state).  In particular, synchronization details are no longer leaked into
    the caller.
    The change excises the page lock from several frequently executed code
    paths.  In particular, vm_object_terminate() no longer bounces between
    page locks as it releases an object's pages, and direct I/O and
    sendfile(SF_NOCACHE) completions no longer require the page lock.  In
    these latter cases we now get linear scalability in the common scenario
    where different threads are operating on different files.
    __FreeBSD_version is bumped.  The DRM ports have been updated to
    accommodate the KPI changes.
    Reviewed by:    jeff (earlier version)
    Tested by:      gallatin (earlier version), pho
    Sponsored by:   Netflix
    Differential Revision:  https://reviews.freebsd.org/D20486
    Notes: svn path=/head/; revision=352110
* posixshm: start counting writeable mappings  (Kyle Evans, 2019-09-03; 1 file, -5/+12)
    r351650 switched posixshm to using OBJT_SWAP for shm_object.
    r351795 added support to the swap_pager for tracking writeable mappings.
    Take advantage of this and start tracking writeable mappings; fd sealing
    will use this to reject a seal on writing with EBUSY if any such mapping
    exists.
    Reviewed by:    kib, markj
    Differential Revision:  https://reviews.freebsd.org/D21456
    Notes: svn path=/head/; revision=351796
* posixshm: switch to OBJT_SWAP in advance of other changes  (Kyle Evans, 2019-09-01; 1 file, -1/+1)
    Future changes to posixshm will start tracking writeable mappings in
    order to support file sealing.  Tracking writeable mappings for an
    OBJT_DEFAULT object is complicated as it may be swapped out and converted
    to an OBJT_SWAP.  One may generically add this tracking for vm_object,
    but this is difficult to do without increasing the memory footprint of
    vm_object and blowing up memory usage by a significant amount.
    On the other hand, the swap pager can be expanded to track writeable
    mappings without increasing vm_object size.  This change is currently in
    D21456.  Switch over to OBJT_SWAP in advance of the other changes to the
    swap pager and posixshm.
    Notes: svn path=/head/; revision=351650
* Wire pages in vm_page_grab() when appropriate.  (Mark Johnston, 2019-08-28; 1 file, -4/+3)
    uiomove_object_page() and exec_map_first_page() would previously wire a
    page after having grabbed it.  Ask vm_page_grab() to perform the wiring
    instead: this removes some redundant code, and is cheaper in the case
    where the requested page is not resident since the page allocator can be
    asked to initialize the page as wired, whereas a separate vm_page_wire()
    call requires the page lock.
    In vm_imgact_hold_page(), use vm_page_unwire_noq() instead of
    vm_page_unwire(PQ_NONE).  The latter ensures that the page is dequeued
    before returning, but this is unnecessary since vm_page_free() will
    trigger a batched dequeue of the page.
    Reviewed by:    alc, kib
    Tested by:      pho (part of a larger patch)
    MFC after:      1 week
    Sponsored by:   Netflix
    Differential Revision:  https://reviews.freebsd.org/D21440
    Notes: svn path=/head/; revision=351569
* kern_shm_open: push O_CLOEXEC into caller control  (Kyle Evans, 2019-07-31; 1 file, -2/+10)
    The motivation for this change is to allow wrappers around shm to be
    written that don't set CLOEXEC.  kern_shm_open currently accepts
    O_CLOEXEC but sets it unconditionally.  kern_shm_open is used by the
    shm_open(2) syscall, which is mandated by POSIX to set CLOEXEC, and
    CloudABI's sys_fd_create1().  Presumably O_CLOEXEC is intended in the
    latter caller, but it's unclear from the context.
    sys_shm_open() now unconditionally sets O_CLOEXEC to meet POSIX
    requirements, and a comment has been dropped in to kern_fd_open() to
    explain the situation and add a pointer to where O_CLOEXEC setting is
    maintained for shm_open(2) correctness.  CloudABI's sys_fd_create1() also
    unconditionally sets O_CLOEXEC to match previous behavior.
    This also has the side-effect of making flags correctly reflect the
    O_CLOEXEC status on this fd for the rest of kern_shm_open(), but a
    glance-over leads me to believe that it didn't really matter.
    Reviewed by:    kib, markj
    MFC after:      1 week
    Differential Revision:  https://reviews.freebsd.org/D21119
    Notes: svn path=/head/; revision=350464
* Avoid relying on header pollution from sys/refcount.h.  (Mark Johnston, 2019-07-29; 1 file, -0/+1)
    MFC after:      3 days
    Sponsored by:   The FreeBSD Foundation
    Notes: svn path=/head/; revision=350421
* Merge the vm_page hold and wire mechanisms.  (Mark Johnston, 2019-07-08; 1 file, -6/+2)
    The hold_count and wire_count fields of struct vm_page are separate
    reference counters with similar semantics.  The remaining essential
    differences are that holds are not counted as a reference with respect to
    LRU, and holds have an implicit free-on-last-unhold semantic whereas
    vm_page_unwire() callers must explicitly determine whether to free the
    page once the last reference to the page is released.
    This change removes the KPIs which directly manipulate hold_count.
    Functions such as vm_fault_quick_hold_pages() now return wired pages
    instead.  Since r328977 the overhead of maintaining LRU for wired pages
    is lower, and in many cases vm_fault_quick_hold_pages() callers would
    swap holds for wirings on the returned pages anyway, so with this change
    we remove a number of page lock acquisitions.
    No functional change is intended.  __FreeBSD_version is bumped.
    Reviewed by:    alc, kib
    Discussed with: jeff
    Discussed with: jhb, np (cxgbe)
    Tested by:      pho (previous version)
    Sponsored by:   Netflix
    Differential Revision:  https://reviews.freebsd.org/D19247
    Notes: svn path=/head/; revision=349846
* Remove TODO comment after posixshmcontrol(1) added.  (Konstantin Belousov, 2019-05-30; 1 file, -10/+4)
    Sponsored by:   The FreeBSD Foundation
    MFC after:      3 days
    Notes: svn path=/head/; revision=348433
* Add a kern.ipc.posix_shm_list sysctl.  (Konstantin Belousov, 2019-05-23; 1 file, -11/+67)
    The sysctl provides a listing of the named, linked POSIX shared memory
    segments existing in the system.  Reuse shm_fill_kinfo() for filling
    individual struct kinfo_file.
    Remove the unneeded lock around reading of shmfd->shm_mode.
    Reviewed by:    jilles, tmunro
    Sponsored by:   The FreeBSD Foundation
    MFC after:      1 week
    Differential revision: https://reviews.freebsd.org/D20258
    Notes: svn path=/head/; revision=348158
* Report ref count of the backing object as st_nlink for posix shm fd.  (Konstantin Belousov, 2019-05-23; 1 file, -0/+1)
    Unless there are transient references to the object, the ref count is
    equal to the number of the shared memory segment mappings plus one.
    Reviewed by:    jilles, tmunro
    Sponsored by:   The FreeBSD Foundation
    MFC after:      1 week
    Differential revision: https://reviews.freebsd.org/D20258
    Notes: svn path=/head/; revision=348157
* Allow FIONBIO and FIOASYNC ioctls on POSIX shm descriptors.  (Mark Johnston, 2019-02-28; 1 file, -1/+21)
    They have no effect, as with filesystem file descriptors.  This improves
    compatibility with some existing userspace code.
    Submitted by:   Greg V <greg@unrelenting.technology>
    Reviewed by:    kib
    MFC after:      1 week
    Differential Revision:  https://reviews.freebsd.org/D19330
    Notes: svn path=/head/; revision=344670
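    A minimal sketch of the compatibility case this enables: generic code
    that toggles non-blocking or async mode on an arbitrary fd no longer
    fails on a shm descriptor (the ioctls are accepted as no-ops).

        #include <sys/ioctl.h>
        #include <sys/filio.h>
        #include <sys/mman.h>

        #include <err.h>
        #include <fcntl.h>

        int
        main(void)
        {
            int fd, on = 1;

            fd = shm_open(SHM_ANON, O_RDWR, 0600);
            if (fd == -1)
                err(1, "shm_open");
            if (ioctl(fd, FIONBIO, &on) == -1)
                err(1, "FIONBIO");
            if (ioctl(fd, FIOASYNC, &on) == -1)
                err(1, "FIOASYNC");
            return (0);
        }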
* Remove unused argument to priv_check_cred.  (Mateusz Guzik, 2018-12-11; 1 file, -1/+1)
    Patch mostly generated with coccinelle:
        @@
        expression E1,E2;
        @@
        - priv_check_cred(E1,E2,0)
        + priv_check_cred(E1,E2)
    Sponsored by:   The FreeBSD Foundation
    Notes: svn path=/head/; revision=341827
* uipc_shm: use unr64 for inode numbers  (Mateusz Guzik, 2018-11-21; 1 file, -11/+3)
    Sponsored by:   The FreeBSD Foundation
    Notes: svn path=/head/; revision=340747
* sys/kern: adoption of SPDX licensing ID tags.  (Pedro F. Giffuni, 2017-11-27; 1 file, -0/+2)
    Mainly focus on files that use the BSD 2-Clause license; however, the
    tool I was using misidentified many licenses, so this was mostly a
    manual - error prone - task.
    The Software Package Data Exchange (SPDX) group provides a specification
    to make it easier for automated tools to detect and summarize well-known
    open source licenses.  We are gradually adopting the specification,
    noting that the tags are considered only advisory and do not, in any way,
    supersede or replace the license texts.
    Notes: svn path=/head/; revision=326271
* Replace many instances of VM_WAIT with blocking page allocation flags similar to the kernel memory allocator.  (Jeff Roberson, 2017-11-08; 1 file, -6/+3)
    This simplifies NUMA allocation because the domain will be known at wait
    time and races between failure and sleeping are eliminated.  This also
    reduces boilerplate code and simplifies callers.
    A wait primitive is supplied for uma zones for similar reasons.  This
    eliminates some non-specific VM_WAIT calls in favor of more explicit
    sleeps that may be satisfied without new pages.
    Reviewed by:    alc, kib, markj
    Tested by:      pho
    Sponsored by:   Netflix, Dell/EMC Isilon
    Notes: svn path=/head/; revision=325530