path: root/sys/amd64/include
Commit message (Author, Age, Files, Lines)
* x86 atomics: Remove unused WANT_FUNCTIONS (Olivier Certner, 2025-01-17, 1 file, -3/+0)
    This macro has not been in use since commit "inline atomics and allow
    tied modules to inline locks" (r335873, f4b3640475cec929).

    Reviewed by: markj, kib, emaste, imp
    MFC after: 5 days
    Sponsored by: The FreeBSD Foundation
    Differential Revision: https://reviews.freebsd.org/D48061

    (cherry picked from commit fa368cc86cebe7185b3a99d4f6083033da377eee)
* atomics: Constify loads (Olivier Certner, 2025-01-17, 1 file, -1/+1)
    In order to match reality, allow using these functions with pointers
    on const objects, and bring us closer to C11.

    Remove the '+' modifier in the inline asm statement's constraint for
    '*p' (the value to load) in atomic_load_acq_64_i586(). CMPXCHG8B
    always writes back some value, even when the value exchange does not
    happen, in which case what was read is written back.
    atomic_load_acq_64_i586() further takes care of the operation
    atomically writing back the same value that was read in any case. All
    in all, this makes the inline asm's write back undetectable by any
    other code, whether executing on other CPUs or code on the same CPU
    before and after the call to atomic_load_acq_64_i586(), except for
    the fact that CMPXCHG8B will trigger a #GP(0) if the memory address
    is part of a read-only mapping. This unfortunate property is however
    out of scope of the C abstract machine, and in particular independent
    of whether the 'uint64_t' pointed to is declared 'const' or not.

    Approved by: markj (mentor)
    MFC after: 5 days
    Sponsored by: The FreeBSD Foundation
    Differential Revision: https://reviews.freebsd.org/D46887

    (cherry picked from commit 5e9a82e898d55816c366cfa3ffbca84f02569fe5)
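A minimal sketch of what the constified prototypes permit, assuming the post-change <machine/atomic.h>; the helper below is made up for illustration:

    #include <sys/types.h>
    #include <machine/atomic.h>

    /* The pointed-to object may now be const-qualified. */
    static inline u_int
    read_generation(const volatile u_int *genp)
    {
            return (atomic_load_acq_int(genp));
    }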
* amd64 pcb.h: use 4 hex digits for pcb flags (Konstantin Belousov, 2024-02-14, 1 file, -8/+8)
    (cherry picked from commit 5f7ac491eef4994b23b4de250927a85c69a64a31)
* vmm.h: remove dup declaration (Konstantin Belousov, 2023-12-25, 1 file, -2/+0)
    (cherry picked from commit 7c8f16318499d2b05e916abd66148e5409284a9d)
* sys: Remove $FreeBSD$: one-line .c comment pattern (Warner Losh, 2023-08-23, 36 files, -36/+0)
    Remove /^/[*/]\s*\$FreeBSD\$.*\n/

    Similar commit in current:
    (cherry picked from commit 71625ec9ad2a)
* sys: Remove $FreeBSD$: one-line .h pattern (Warner Losh, 2023-08-23, 17 files, -17/+0)
    Remove /^\s*\*+\s*\$FreeBSD\$.*$\n/

    Similar commit in current:
    (cherry picked from commit 2ff63af9b88c)
* sys: Remove $FreeBSD$: two-line .h pattern (Warner Losh, 2023-08-23, 37 files, -74/+0)
    Remove /^\s*\*\n \*\s+\$FreeBSD\$$\n/

    Similar commit in current:
    (cherry picked from commit 95ee2897e98f)
* bhyve: fix vCPU single-stepping on VMX (Bojan Novković, 2023-08-17, 1 file, -0/+1)
    This patch fixes virtual machine single-stepping on VMX hosts.

    Currently, when using bhyve's gdb stub, each attempt at single-stepping
    a vCPU lands in a timer interrupt. The current single-stepping
    mechanism uses the Monitor Trap Flag feature to cause a VMEXIT after a
    single instruction is executed. Unfortunately, the SDM states that MTF
    causes VMEXITs for the next instruction that gets executed, which is
    often not what the person using the debugger expects. [1]

    This patch adds a new VM capability that masks interrupts on a vCPU by
    blocking interrupt injection, and modifies the gdb stub to use the
    newly added capability while single-stepping a vCPU.

    [1] Intel SDM 26.5.2 Vol. 3C

    Reviewed by: corvink, jhb
    MFC after: 1 week
    Differential Revision: https://reviews.freebsd.org/D39949

    (cherry picked from commit fefac543590db4e1461235b7c936f46026d0f318)
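A hedged sketch of how a debugger stub might use such a capability; the VM_CAP_MASK_HWINTR name and the libvmmapi call shape are assumptions drawn from memory, not taken from this log:

    #include <sys/types.h>
    #include <machine/vmm.h>
    #include <vmmapi.h>

    /* Assumed: single-step without the guest taking timer interrupts. */
    static void
    gdb_single_step(struct vcpu *vcpu)
    {
            (void)vm_set_capability(vcpu, VM_CAP_MASK_HWINTR, 1);
            (void)vm_set_capability(vcpu, VM_CAP_MTRAP_EXIT, 1);
            /* ... run the vCPU; on the MTF exit, undo both settings ... */
            (void)vm_set_capability(vcpu, VM_CAP_MTRAP_EXIT, 0);
            (void)vm_set_capability(vcpu, VM_CAP_MASK_HWINTR, 0);
    }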
* spdx: The BSD-2-Clause-FreeBSD identifier is obsolete, drop -FreeBSD (Warner Losh, 2023-07-25, 26 files, -26/+26)
    The SPDX folks have obsoleted the BSD-2-Clause-FreeBSD identifier.
    Catch up to that fact and revert to their recommended match of
    BSD-2-Clause.

    Discussed with: pfg
    MFC after: 3 days
    Sponsored by: Netflix

    (cherry picked from commit 4d846d260e2b9a3d4d0a701462568268cbfe7a5b)
* hwpmc: use kstack_contains() (Mitchell Horne, 2023-06-09, 1 file, -5/+3)
    This existing helper function is preferable to the hand-rolled
    calculation of the kstack bounds. Make some small style improvements
    while here. Notably, rename every instance of "r", the return
    address, to "ra". Tidy the includes in the affected files.

    Reviewed by: jkoshy
    MFC after: 2 weeks
    Sponsored by: The FreeBSD Foundation
    Differential Revision: https://reviews.freebsd.org/D39909

    (cherry picked from commit aba91805aa92a47b2f3f01741a55ff9f07c42d04)
* amd64: fix PKRU and swapout interaction (Konstantin Belousov, 2023-04-21, 1 file, -0/+1)
    (cherry picked from commit 1e0e335b0f0dbae8ce49307377b23ef3673bd402)
* bhyve: fix restore of kernel structs (Vitaliy Gusev, 2023-03-17, 2 files, -4/+1)
    vmx_snapshot() and svm_snapshot() do not save any data, and an error
    occurs at resume:

        Restoring kernel structs...
        vm_restore_kern_struct: Kernel struct size was 0 for: vmx
        Failed to restore kernel structs.

    Reviewed by: corvink, markj
    Fixes: 39ec056e6dbd89e26ee21d2928dbd37335de0ebc ("vmm: Rework snapshotting of CPU-specific per-vCPU data.")
    MFC after: 2 weeks
    Sponsored by: vStack
    Differential Revision: https://reviews.freebsd.org/D38476

    (cherry picked from commit 8104fc31a234bad1ba68910f66876395fc58ebdc)
* Unstaticize {get,set}_fpcontext() on amd64 (Edward Tomasz Napierala, 2023-02-09, 1 file, -0/+5)
    This will be used to fix Linux signal delivery.

    Discussed with: kib
    Sponsored by: EPSRC

    (cherry picked from commit 562bc0a943d1fad1a9b551557609d2941a851b4d)
* vmm: avoid spurious rendezvous (Corvin Köhne, 2023-02-08, 1 file, -4/+10)
    A vcpu only checks whether a rendezvous is in progress or not to
    decide if it should handle a rendezvous. This could lead to a spurious
    rendezvous where a vcpu tries to handle a rendezvous it isn't part of.
    This situation is properly handled by vm_handle_rendezvous, but it
    could potentially degrade performance. Avoid that with an early check
    of whether the vcpu is part of the rendezvous or not.

    At the moment, rendezvous are only used to spin up application
    processors and to send ioapic interrupts. Spinning up application
    processors is done in the guest boot phase by sending INIT SIPI
    sequences to single vcpus. This is known to cause spurious rendezvous
    and only occurs in the boot phase. Sending ioapic interrupts is rare
    because modern guests will use MSI, and the rendezvous is always sent
    to all vcpus.

    Reviewed by: jhb
    MFC after: 1 week
    Sponsored by: Beckhoff Automation GmbH & Co. KG
    Differential Revision: https://reviews.freebsd.org/D37390

    (cherry picked from commit 892feec2211d0dbd58252a34d78dbcb2d5dd7593)
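A minimal sketch of the early membership check described above; the function and parameter names are illustrative, not the actual vmm ones:

    #include <sys/param.h>
    #include <sys/cpuset.h>

    /* Hypothetical: enter the rendezvous path only when targeted. */
    static int
    vcpu_targeted_by_rendezvous(const cpuset_t *req_cpus, int vcpuid)
    {
            return (CPU_ISSET(vcpuid, req_cpus));
    }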
* vmm: Don't lock a vCPU for VM_PPTDEV_MSI[X]. (John Baldwin, 2023-01-26, 1 file, -2/+2)
    These are manipulating state in a ppt(4) device, none of which is
    vCPU-specific. Mark the vcpu fields in the relevant ioctl structures
    as unused, but don't remove them for now.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37639

    (cherry picked from commit 91980db1beecd52e34a1550a247e374cfcc746a2)
* vmm: Remove stale comment for vm_rendezvous. (John Baldwin, 2023-01-26, 1 file, -3/+0)
    Support for rendezvous outside of a vcpu context (vcpuid of -1) was
    removed in commit 949f0f47a4e7, and the vm, vcpuid argument pair was
    replaced by a single struct vcpu pointer in commit d8be3d523dd5.

    Reported by: andrew

    (cherry picked from commit 1f6db5d6b5de5e0cafcdb141a988120b0faea049)
* vmm: Convert VM_MAXCPU into a loader tunable hw.vmm.maxcpu. (John Baldwin, 2023-01-26, 1 file, -4/+2)
    The default is now the number of physical CPUs in the system rather
    than 16.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37175

    (cherry picked from commit ee98f99d7a68b284a669fefb969cbfc31df2d0ab)
* vmm: Allocate vCPUs on first use of a vCPU. (John Baldwin, 2023-01-26, 1 file, -0/+4)
    Convert the vcpu[] array in struct vm to an array of pointers and
    allocate vCPUs on first use. This avoids always allocating VM_MAXCPU
    vCPUs for each VM and instead only allocates the vCPUs in use.

    A new per-VM sx lock is added to serialize attempts to allocate vCPUs
    on first use. However, a given vCPU is never freed while the VM is
    active, so the pointer is read via an unlocked read first to avoid
    the need for the lock in the common case once the vCPU has been
    created.

    Some ioctls need to lock all vCPUs. To prevent races with ioctls that
    want to allocate a new vCPU, these ioctls also lock the sx lock that
    protects vCPU creation.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37174

    (cherry picked from commit 98568a005a193ce2c37702a8377ddd10c570e452)
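A hedged sketch of the allocate-on-first-use pattern described above; structure and function names are illustrative, and the real vmm code differs in detail:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/sx.h>
    #include <sys/malloc.h>

    struct demo_vcpu {
            int	id;
            /* ... per-vCPU state ... */
    };

    struct demo_vm {
            struct sx	  vcpus_init_lock;	/* serializes vCPU creation */
            struct demo_vcpu *vcpu[64];		/* NULL until first use */
    };

    static struct demo_vcpu *
    demo_vcpu_get(struct demo_vm *vm, int id)
    {
            struct demo_vcpu *vcpu;

            /* Unlocked read: a vCPU is never freed while the VM is active. */
            vcpu = (struct demo_vcpu *)atomic_load_acq_ptr(
                (uintptr_t *)&vm->vcpu[id]);
            if (vcpu != NULL)
                    return (vcpu);

            sx_xlock(&vm->vcpus_init_lock);
            vcpu = vm->vcpu[id];
            if (vcpu == NULL) {
                    vcpu = malloc(sizeof(*vcpu), M_DEVBUF, M_WAITOK | M_ZERO);
                    atomic_store_rel_ptr((uintptr_t *)&vm->vcpu[id],
                        (uintptr_t)vcpu);
            }
            sx_xunlock(&vm->vcpus_init_lock);
            return (vcpu);
    }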
* vmm: Use a cpuset_t for vCPUs waiting for STARTUP IPIs. (John Baldwin, 2023-01-26, 1 file, -0/+3)
    Retire the boot_state member of struct vlapic and instead use a
    cpuset in the VM to track vCPUs waiting for STARTUP IPIs. INIT IPIs
    add vCPUs to this set, and STARTUP IPIs remove vCPUs from the set.
    STARTUP IPIs are only reported to userland for vCPUs that were
    removed from the set.

    In particular, this permits a subsequent change to allocate vCPUs on
    demand when the vCPU may not be allocated until after a STARTUP IPI
    is reported to userland.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37173

    (cherry picked from commit c0f35dbf19c3c8825bd2b321d8efd582807d1940)
* vmm: Use an sx lock to protect the memory map. (John Baldwin, 2023-01-26, 1 file, -0/+3)
    Previously, bhyve obtained a "read lock" on the memory map for ioctls
    needing to read the map by locking the last vCPU. This is now
    replaced by a new per-VM sx lock.

    Modifying the map requires exclusively locking the sx lock as well as
    locking all existing vCPUs. Reading the map requires either locking
    one vCPU or the sx lock. This permits safely modifying or querying
    the memory map while some vCPUs do not exist, which will be true in a
    future commit.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37172

    (cherry picked from commit 67b69e76e8eecfd204f6de636d622a1d681c8d7e)
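A minimal sx(9) sketch of the read/modify discipline described above; the structure and function names are illustrative only:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/sx.h>

    struct demo_map {
            struct sx	mem_segs_lock;
            /* ... memory map entries ... */
    };

    static void
    demo_map_read(struct demo_map *m)
    {
            sx_slock(&m->mem_segs_lock);	/* shared: query the map */
            /* ... look up a segment ... */
            sx_sunlock(&m->mem_segs_lock);
    }

    static void
    demo_map_modify(struct demo_map *m)
    {
            sx_xlock(&m->mem_segs_lock);	/* exclusive: change the map */
            /* ... the real code also locks all existing vCPUs ... */
            sx_xunlock(&m->mem_segs_lock);
    }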
* vmm: Lookup vcpu pointers in vmmdev_ioctl. (John Baldwin, 2023-01-26, 1 file, -15/+14)
    Centralize mapping vCPU IDs to struct vcpu objects in vmmdev_ioctl
    and pass vcpu pointers to the routines in vmm.c. For operations that
    want to perform an action on all vCPUs or on a single vCPU, pass
    pointers to both the VM and the vCPU, using a NULL vCPU pointer to
    request global actions.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37168

    (cherry picked from commit 3f0f4b1598e0e7005bebed7ea3458e96d0fb8e2f)
* vmm: Use struct vcpu in the rendezvous code. (John Baldwin, 2023-01-26, 1 file, -2/+2)
    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37165

    (cherry picked from commit d8be3d523dd50a17f48957c1bb2e0cd7bbf02cab)
* vmm: Restore the correct vm_inject_*() prototypes (Mark Johnston, 2023-01-26, 1 file, -8/+8)
    Fixes: 80cb5d845b8f ("vmm: Pass vcpu instead of vm and vcpuid...")
    Reviewed by: jhb
    Differential Revision: https://reviews.freebsd.org/D37443

    (cherry picked from commit ca6b48f08034114edf1fa19cdc088021af2eddf3)
* vmm: Pass vcpu instead of vm and vcpuid to APIs used from CPU backends. (John Baldwin, 2023-01-26, 1 file, -30/+28)
    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37162

    (cherry picked from commit 80cb5d845b8f4b7dc25b5dc7f4a9a653b98b0cc6)
* vmm: Use struct vcpu in the instruction emulation code. (John Baldwin, 2023-01-26, 2 files, -19/+64)
    This passes struct vcpu down in place of struct vm and an integer
    vcpu index through the in-kernel instruction emulation code. To
    minimize userland disruption, helper macros are used for the vCPU
    arguments passed into and through the shared instruction emulation
    code.

    A few other APIs used by the instruction emulation code have also
    been updated to accept struct vcpu in the kernel, including
    vm_get/set_register and vm_inject_fault.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37161

    (cherry picked from commit d3956e46736ffaee5060c9baf0a40f428bc34ec3)
* vmm: Add vm_gpa_hold_global wrapper function. (John Baldwin, 2023-01-26, 1 file, -0/+2)
    This handles the case that guest pages are being held not on behalf
    of a virtual CPU but globally. Previously this was handled by passing
    a vcpuid of -1 to vm_gpa_hold, but that will not work in the future
    when vm_gpa_hold is changed to accept a struct vcpu pointer.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37160

    (cherry picked from commit 28b561ad9d03617418aed33b9b8c1311e940f0c8)
* bhyve: Remove unused vm and vcpu arguments from vm_copy routines. (John Baldwin, 2023-01-26, 1 file, -6/+3)
    The arguments identifying the VM and vCPU are only needed for
    vm_copy_setup.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37158

    (cherry picked from commit 2b4fe856f44ded02f3450bac1782bb49b60b7dd5)
* vmm: Use struct vcpu with the vmm_stat API. (John Baldwin, 2023-01-26, 1 file, -1/+1)
    The function callbacks still use struct vm and a vCPU index.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37157

    (cherry picked from commit 3dc3d32ad67b38ab44ed4a7cf3020a0741b47ec1)
* vmm: Expose struct vcpu as an opaque type. (John Baldwin, 2023-01-26, 1 file, -1/+6)
    Pass a pointer to the current struct vcpu to the vcpu_init callback
    and save this pointer in the CPU-specific vcpu structures. Add
    routines to fetch a struct vcpu by index from a VM and to query the
    VM and vcpuid from a struct vcpu.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37156

    (cherry picked from commit 950af9ffc616ee573a1ce6ef0c841e897b13dfc4)
* vmm: Remove the per-vm cookie argument from vmmops taking a vcpu. (John Baldwin, 2023-01-26, 1 file, -17/+12)
    This requires storing a reference to the per-vm cookie in the
    CPU-specific vCPU structure. Take advantage of this new field to
    remove no-longer-needed function arguments in the CPU-specific
    backends. In particular, stop passing the per-vm cookie to functions
    that either don't use it or only use it for KTR traces.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37152

    (cherry picked from commit 869c8d1946eb4feb8ad651abdf87af0e5c0111b4)
* vmm: Refactor storage of CPU-dependent per-vCPU data. (John Baldwin, 2023-01-26, 1 file, -10/+14)
    Rather than storing static arrays of per-vCPU data in the
    CPU-specific per-VM structure, adopt a more dynamic model similar to
    that used to manage CPU-specific per-VM data.

    That is, add new vmmops methods to init and cleanup a single vCPU.
    The init method returns a pointer that is stored in 'struct vcpu' as
    a cookie pointer. This cookie pointer is now passed to other vmmops
    callbacks in place of the integer index. The index is now only used
    in KTR traces and when calling back into the CPU-independent layer.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37151

    (cherry picked from commit 1aa5150479bf35c90c6770e6ea90e8462cfb6bf9)
* vmm: Rework snapshotting of CPU-specific per-vCPU data. (John Baldwin, 2023-01-26, 1 file, -2/+2)
    Previously some per-vCPU state was saved in vmmops_snapshot and other
    state was saved in vmmops_vcmx_snapshot. Consolidate all per-vCPU
    state into the latter routine and rename the hook to the more generic
    'vcpu_snapshot'. Note that the CPU-independent per-vCPU data is still
    stored in a separate blob, as well as the per-vCPU local APIC data.

    Reviewed by: corvink, markj
    Differential Revision: https://reviews.freebsd.org/D37150

    (cherry picked from commit 39ec056e6dbd89e26ee21d2928dbd37335de0ebc)
* amd64 pmap.h: make it easier to use the header for other consumers (Konstantin Belousov, 2023-01-20, 1 file, -0/+2)
    Tested by: pho

    (cherry picked from commit ad97b9bbfccdb36f17788033903b1dbf508fcb96)
* amd64: be more precise when enabling the AlderLake small core PCID workaround (Konstantin Belousov, 2023-01-20, 1 file, -0/+1)
    Tested by: pho

    (cherry picked from commit a2c08eba43a2c0ebeac7117f708fb9392022a300)
* amd64: for small cores, use (big hammer) INVPCID_CTXGLOB instead of INVLPG (Konstantin Belousov, 2023-01-20, 2 files, -1/+22)
    PR: 261169, 266145
    Tested by: pho

    (cherry picked from commit cde70e312c3fde5b37a29be1dacb7fde9a45b94a)
* amd64: identify small cores (Konstantin Belousov, 2023-01-20, 1 file, -1/+2)
    Tested by: pho

    (cherry picked from commit 45ac7755a7c5d8508176b3d015bb27ff58485c80)
* vmm: permit some IPIs to be handled by userspace (Corvin Köhne, 2022-12-09, 1 file, -0/+8)
    Add VM_EXITCODE_IPI to permit returning unhandled IPIs to userland.
    INIT and STARTUP IPIs are now returned to userland. For backward
    compatibility, a new capability is added for enabling
    VM_EXITCODE_IPI.

    Reviewed by: jhb
    Differential Revision: https://reviews.freebsd.org/D35623
    Sponsored by: Beckhoff Automation GmbH & Co. KG

    (cherry picked from commit 0bda8d3e9f7a5c04881219723436616b23041e5f)
* bhyve: Drop volatile qualifiers from snapshot code (Mark Johnston, 2022-11-29, 1 file, -5/+5)
    They accomplish nothing since the qualifier is cast away in calls to
    memcpy() and copyin()/copyout().

    No functional change intended.

    MFC after: 2 weeks
    Reviewed by: corvink, jhb
    Differential Revision: https://reviews.freebsd.org/D37292

    (cherry picked from commit 8b1adff8bcbdf0e58878431c6ed5a14553178d4d)
* Simplify kernel sanitizer interceptors (Mark Johnston, 2022-11-14, 1 file, -10/+2)
    KASAN and KCSAN implement interceptors for various primitive
    operations that are not instrumented by the compiler. KMSAN requires
    them as well. Rather than adding new cases for each sanitizer which
    requires interceptors, implement the following protocol:

    - When interceptor definitions are required, define
      SAN_NEEDS_INTERCEPTORS and SANITIZER_INTERCEPTOR_PREFIX.
    - In headers that declare functions which need to be intercepted by a
      sanitizer runtime, use SANITIZER_INTERCEPTOR_PREFIX to provide
      declarations.
    - When SAN_RUNTIME is defined, do not redefine the names of
      intercepted functions. This is typically the case in files which
      implement sanitizer runtimes but is also needed in, for example,
      files which define ifunc selectors for intercepted operations.

    MFC after: 2 weeks
    Sponsored by: The FreeBSD Foundation

    (cherry picked from commit a90d053b84223a4e5cb65852a9b6193570ab1c7d)
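A hedged sketch of the macro shape such a protocol implies; the bcopy example and the SAN_INTERCEPTOR helper are illustrative, and the real headers are more involved:

    #include <sys/cdefs.h>
    #include <sys/types.h>

    #if defined(SAN_NEEDS_INTERCEPTORS) && !defined(SAN_RUNTIME)
    /* Declare and call kasan_bcopy()/kmsan_bcopy()/... instead of bcopy(). */
    #define	SAN_INTERCEPTOR(func)	\
    	__CONCAT(SANITIZER_INTERCEPTOR_PREFIX, __CONCAT(_, func))
    void SAN_INTERCEPTOR(bcopy)(const void *, void *, size_t);
    #define	bcopy(from, to, len)	SAN_INTERCEPTOR(bcopy)((from), (to), (len))
    #else
    void bcopy(const void *, void *, size_t);
    #endif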
* vmm: add tunable to trap WBINVD (Corvin Köhne, 2022-06-20, 1 file, -0/+1)
    x86 is cache coherent. However, there are special cases where cache
    coherency isn't ensured (e.g. when switching the caching mode). In
    these cases, WBINVD can be used. WBINVD writes all cache lines back
    into main memory and invalidates the whole cache.

    Due to the invalidation of the whole cache, WBINVD is a very heavy
    instruction and degrades the performance on all cores. So, we should
    minimize the use of WBINVD as much as possible.

    In a virtual environment, the WBINVD call is mostly useless. The
    guest isn't able to break cache coherency because it can't switch the
    physical cache mode. When using PCI passthrough, WBINVD might be
    useful. Nevertheless, trapping and ignoring WBINVD is an unsafe
    operation. For that reason, we implement it as a tunable.

    Reviewed by: jhb
    Sponsored by: Beckhoff Automation GmbH & Co. KG
    MFC after: 1 week
    Differential Revision: https://reviews.freebsd.org/D35253

    (cherry picked from commit 3ba952e1a2179c232402c82d5c7587159b15a8dd)
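A hedged sketch of how such a knob is typically wired up as a loader tunable; the node path and variable name here are assumptions, not the committed ones:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    /* Assumed example: 0 = pass WBINVD through, 1 = trap and ignore it. */
    static int trap_wbinvd = 0;
    SYSCTL_DECL(_hw_vmm);
    SYSCTL_INT(_hw_vmm, OID_AUTO, trap_wbinvd, CTLFLAG_RDTUN,
        &trap_wbinvd, 0, "Trap and ignore WBINVD executed by the guest");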
* x86: Remove silly checks for <sys/cdefs.h>. (John Baldwin, 2022-05-13, 1 file, -4/+0)
    These headers #include <sys/cdefs.h> right after checking if it has
    already been #included. The nested #include already existed when the
    check for _SYS_CDEFS_H_ was added, so the check shouldn't have been
    added in the first place.

    PR: 263102 (exp-run)
    Reported by: brooks
    Reviewed by: brooks, imp, emaste
    Differential Revision: https://reviews.freebsd.org/D34796

    (cherry picked from commit 1c1bf5bd7c1e479a7889839b941f53e689aa2569)
* Create sys/reg.h for the common code previously in machine/reg.h (Andrew Turner, 2022-05-12, 1 file, -1/+0)
    Move the common kernel function signatures from machine/reg.h to a
    new sys/reg.h. This is in preparation for adding PT_GETREGSET to
    ptrace(2).

    Reviewed by: imp, markj
    Sponsored by: DARPA, AFRL (original work)
    Sponsored by: The FreeBSD Foundation
    Differential Revision: https://reviews.freebsd.org/D19830

    (cherry picked from commit b792434150d66b9b2356fb9a7548f4c7f0a0f16c)
* bhyve: Remove VM_MAXCPU from the userspace API/ABI. (John Baldwin, 2022-05-11, 1 file, -0/+2)
    Reviewed by: grehan
    Differential Revision: https://reviews.freebsd.org/D34494

    (cherry picked from commit f1d450ddee669f1e6fef7aefdf8102fc518eef75)
* Extend the VMM stats interface to support a dynamic count of statistics. (John Baldwin, 2022-04-29, 1 file, -0/+1)
    - Add a starting index to 'struct vmstats' and change the VM_STATS
      ioctl to fetch the 64 stats starting at that index. A compat shim
      for <= 13 continues to fetch only the first 64 stats.
    - Extend vm_get_stats() in libvmmapi to use a loop and a static
      thread local buffer which grows to hold the stats needed.

    Reviewed by: markj
    Differential Revision: https://reviews.freebsd.org/D27463

    (cherry picked from commit 64269786170ffd8e3348edea0fc5f5b09b79391e)
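A hedged sketch of the paginated fetch loop the second item describes; the chunk constant and helper function are stand-ins, not the real vmmapi code:

    #include <stdint.h>
    #include <stdlib.h>

    #define	DEMO_STATS_CHUNK 64		/* stats returned per request */

    /* Stand-in for the VM_STATS ioctl: fills buf, reports how many. */
    extern int demo_fetch_chunk(int index, uint64_t *buf, int *countp);

    static uint64_t *
    demo_get_stats(int *totalp)
    {
            static _Thread_local uint64_t *stats;
            static _Thread_local int cap;
            int count, index;

            index = 0;
            do {
                    if (index + DEMO_STATS_CHUNK > cap) {
                            uint64_t *p;

                            p = realloc(stats,
                                (index + DEMO_STATS_CHUNK) * sizeof(*p));
                            if (p == NULL)
                                    return (NULL);
                            stats = p;
                            cap = index + DEMO_STATS_CHUNK;
                    }
                    if (demo_fetch_chunk(index, stats + index, &count) != 0)
                            return (NULL);
                    index += count;
            } while (count == DEMO_STATS_CHUNK);
            *totalp = index;
            return (stats);
    }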
* Simplify swi for bus_dma. (John Baldwin, 2022-04-29, 1 file, -1/+0)
    When a DMA request using bounce pages completes, a swi is triggered
    to schedule pending DMA requests using the just-freed bounce pages.
    For a long time this bus_dma swi has been tied to a "virtual memory"
    swi (swi_vm). However, all of the swi_vm implementations are the same
    and consist of checking a flag (busdma_swi_pending) which is always
    true and if set calling busdma_swi.

    I suspect this dates back to the pre-SMPng days and that the
    intention was for swi_vm to serve as a mux. However, in the current
    scheme there's no need for the mux. Instead, remove swi_vm and vm_ih.
    Each bus_dma implementation that uses bounce pages is responsible for
    creating its own swi (busdma_ih) which it now schedules directly.
    This swi invokes busdma_swi directly removing the need for
    busdma_swi_pending.

    One consequence is that the swi now works on RISC-V which had
    previously failed to invoke busdma_swi from swi_vm.

    Reviewed by: imp, kib
    Sponsored by: Netflix
    Differential Revision: https://reviews.freebsd.org/D33447

    (cherry picked from commit 254e4e5b77d7788c46333ae35d5e9f347e22c746)
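A hedged sketch of the per-implementation swi pattern described above; the handler, its argument, and the SWI_BUSDMA priority constant are assumptions, while swi_add()/swi_sched() are the standard intr(9) interfaces:

    #include <sys/param.h>
    #include <sys/bus.h>
    #include <sys/interrupt.h>

    static void *busdma_ih;			/* cookie from swi_add() */

    static void
    demo_busdma_swi(void *arg __unused)
    {
            /* Retry DMA requests that were waiting for bounce pages. */
    }

    static void
    demo_busdma_swi_setup(void)
    {
            swi_add(NULL, "busdma", demo_busdma_swi, NULL,
                SWI_BUSDMA /* assumed priority constant */, INTR_MPSAFE,
                &busdma_ih);
    }

    /* Later, when bounce pages are freed: */
    static void
    demo_bounce_pages_freed(void)
    {
            swi_sched(busdma_ih, 0);
    }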
* Add <machine/tls.h> header to hold MD constants and helpers for TLS. (John Baldwin, 2022-04-29, 1 file, -0/+5)
    The header exports the following:

    - Definition of struct tcb.
    - Helpers to get/set the tcb for the current thread.
    - TLS_TCB_SIZE (size of TCB)
    - TLS_TCB_ALIGN (alignment of TCB)
    - TLS_VARIANT_I or TLS_VARIANT_II
    - TLS_DTV_OFFSET (bias of pointers in dtv[])
    - TLS_TP_OFFSET (bias of "thread pointer" relative to TCB)

    Note that TLS_TP_OFFSET does not account for whether the unbiased
    thread pointer points to the start of the TCB (arm and x86) or the
    end of the TCB (MIPS, PowerPC, and RISC-V).

    Note also that for amd64, the struct tcb does not include the unused
    tcb_spare field included in the current structure in libthr. libthr
    does not use this field, and the existing calls in libc and rtld that
    allocate a TCB for amd64 assume it is the size of 3 Elf_Addr's (and
    thus do not allocate room for tcb_spare).

    A <sys/_tls_variant_i.h> header is used by architectures using
    Variant I TLS which uses a common struct tcb.

    Reviewed by: kib (older version of x86/tls.h), jrtc27
    Sponsored by: The University of Cambridge, Google Inc.
    Differential Revision: https://reviews.freebsd.org/D33351

    For stable/13 only, sys/arm/include/tls.h includes support for
    ARM_TP_ADDRESS which is not present in main.

    (cherry picked from commit 1a62e9bc0046bfe20f4dd785561e469ff73fd508)
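A hedged sketch of what the amd64 flavor of such helpers looks like, paraphrased from memory rather than quoted from the committed header; the demo_ names are stand-ins:

    #include <stdint.h>

    struct demo_tcb {
            struct demo_tcb	*tcb_self;	/* %fs:0 points here */
            uintptr_t	 tcb_dtv;
            void		*tcb_thread;
    };

    /* amd64 uses TLS Variant II: the thread pointer is the start of the TCB. */
    static __inline struct demo_tcb *
    demo_tcb_get(void)
    {
            struct demo_tcb *tcb;

            __asm __volatile("movq %%fs:0, %0" : "=r" (tcb));
            return (tcb);
    }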
* smbios: support getting address from EFI (Greg V, 2022-03-03, 1 file, -0/+1)
    On some systems (e.g. Lenovo ThinkPad X240, Apple MacBookPro12,1) the
    SMBIOS entry point is not found in the <0xFFFFF space. Follow the
    SMBIOS spec and use the EFI Configuration Table for locating the
    entry point on EFI systems.

    Reviewed by: rpokala, dab
    MFC after: 1 week
    Sponsored by: Dell EMC Isilon
    Differential Revision: https://reviews.freebsd.org/D29276

    (cherry picked from commit a29bff7a5216bd5f4a76228788e7eacf235004de)
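A hedged, generic sketch of the EFI configuration table walk the spec describes; the types and GUID constant below are illustrative stand-ins, not FreeBSD's <sys/efi.h> definitions:

    #include <stdint.h>
    #include <string.h>

    struct demo_guid { uint8_t b[16]; };
    struct demo_cfgtbl {
            struct demo_guid	guid;
            void			*table;
    };

    /* Assumed to hold the SMBIOS 3.x entry point GUID from the UEFI spec. */
    extern const struct demo_guid demo_smbios3_guid;

    static void *
    demo_find_smbios(const struct demo_cfgtbl *tbl, unsigned nentries)
    {
            for (unsigned i = 0; i < nentries; i++)
                    if (memcmp(&tbl[i].guid, &demo_smbios3_guid,
                        sizeof(demo_smbios3_guid)) == 0)
                            return (tbl[i].table);
            return (NULL);	/* fall back to scanning 0xF0000-0xFFFFF */
    }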
* smbios: Move smbios driver out from x86 machdep code (Allan Jude, 2022-03-03, 1 file, -32/+0)
    Add it to the x86 GENERIC and MINIMAL kernels.

    Sponsored by: Ampere Computing LLC
    Submitted by: Klara Inc.
    Reviewed by: rpokala
    Differential Revision: https://reviews.freebsd.org/D28738

    (cherry picked from commit d0673fe160b04f8162f380926d455dfb966f08fb)
* x86 atomic.h: remove obsoleted comment (Konstantin Belousov, 2022-02-11, 1 file, -8/+0)
    (cherry picked from commit 9596b349bb57e50a2baec8497ced9f712f08f147)
* x86 atomics: use lock prefix unconditionally (Konstantin Belousov, 2022-02-11, 1 file, -51/+16)
    (cherry picked from commit 9c0b759bf9b520537616d026f21a0a98d70acd11)