author    Dimitry Andric <dim@FreeBSD.org>  2020-08-06 19:11:24 +0000
committer Dimitry Andric <dim@FreeBSD.org>  2020-08-06 19:11:24 +0000
commit    0faeaeed40a4c42a778a088cbdad0bc54468eef4 (patch)
tree      62d084de9ed8b89c9a882bac12f64d05b109cf09 /contrib/llvm-project/compiler-rt
parent    e8141ad1df2098dd3f6d627c49ce99307315f5fd (diff)
download  src-0faeaeed40a4c42a778a088cbdad0bc54468eef4.tar.gz
          src-0faeaeed40a4c42a778a088cbdad0bc54468eef4.zip
r356104 | jhibbits | 2019-12-27 00:06:28 +0100 (Fri, 27 Dec 2019) | 25 lines

[PowerPC] Enable atomic.c in compiler_rt and force lock/lock_free decisions at compile time

Summary:
Enables atomic.c in compiler_rt and forces clang not to emit a call for a runtime decision about lock/lock_free. At compile time, if clang cannot decide whether an atomic operation can be lock free, it emits calls to external functions such as `__atomic_is_lock_free`, `__c11_atomic_is_lock_free` and `__atomic_always_lock_free`, postponing the decision to a runtime check. According to the LLVM code documentation, this mechanism exists due to differences between x86_64 processors that cannot be decided at compile time. On PowerPC and PowerPCSPE (32 bit), we already know in advance that these operations cannot be lock free, so we force the decision at compile time and avoid having to implement it in an external library.

This patch was made after 32-bit users testing the PowerPC 32-bit ISO reported that llvm could not be compiled with the in-base llvm because `__atomic_load8` is not implemented.

Submitted by:   alfredo.junior_eldorado.org.br
Reviewed by:    jhibbits, dim
Differential Revision:  https://reviews.freebsd.org/D22549
Notes:
svn path=/projects/clang1100-import/; revision=363977
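To illustrate the behaviour the commit message describes, here is a minimal sketch (not part of the patch; the function and variable names are made up for illustration): a seq_cst load of a 64-bit atomic object. On 32-bit powerpc clang cannot inline this as a lock-free operation, so it lowers it to a libcall in the __atomic_load_8 family, which is why atomic.c must be provided by compiler-rt on that target.

/* Hypothetical example, not part of this commit. */
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t counter;

uint64_t read_counter(void) {
  /* On 32-bit powerpc this becomes a call into compiler-rt's atomic.c
   * rather than an inlined lock-free instruction sequence. */
  return atomic_load_explicit(&counter, memory_order_seq_cst);
}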
Diffstat (limited to 'contrib/llvm-project/compiler-rt')
-rw-r--r--  contrib/llvm-project/compiler-rt/lib/builtins/atomic.c | 13
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/contrib/llvm-project/compiler-rt/lib/builtins/atomic.c b/contrib/llvm-project/compiler-rt/lib/builtins/atomic.c
index 8634a72e77d1..2a69101fbcee 100644
--- a/contrib/llvm-project/compiler-rt/lib/builtins/atomic.c
+++ b/contrib/llvm-project/compiler-rt/lib/builtins/atomic.c
@@ -120,13 +120,20 @@ static __inline Lock *lock_for_pointer(void *ptr) {
return locks + (hash & SPINLOCK_MASK);
}
-/// Macros for determining whether a size is lock free. Clang can not yet
-/// codegen __atomic_is_lock_free(16), so for now we assume 16-byte values are
-/// not lock free.
+/// Macros for determining whether a size is lock free.
#define IS_LOCK_FREE_1 __c11_atomic_is_lock_free(1)
#define IS_LOCK_FREE_2 __c11_atomic_is_lock_free(2)
#define IS_LOCK_FREE_4 __c11_atomic_is_lock_free(4)
+
+/// 32 bit PowerPC doesn't support 8-byte lock_free atomics
+#if !defined(__powerpc64__) && defined(__powerpc__)
+#define IS_LOCK_FREE_8 0
+#else
#define IS_LOCK_FREE_8 __c11_atomic_is_lock_free(8)
+#endif
+
+/// Clang can not yet codegen __atomic_is_lock_free(16), so for now we assume
+/// 16-byte values are not lock free.
#define IS_LOCK_FREE_16 0
/// Macro that calls the compiler-generated lock-free versions of functions
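For context, below is a simplified sketch of how an IS_LOCK_FREE_N macro is typically consumed: dispatch to a compiler-generated lock-free operation when the size is lock free, and otherwise fall back to a locked path. This is an assumption-laden illustration, not the verbatim continuation of atomic.c; the names load_8 and locked_load_8 and the trivial fallback body are placeholders. It assumes a clang build, since __c11_atomic_is_lock_free and __c11_atomic_load are clang builtins.

/* Illustrative sketch only; not the real atomic.c implementation. */
#include <stdint.h>

/* Same guard as in the hunk above: 32-bit PowerPC has no lock-free
 * 8-byte atomics, so the decision is forced at compile time. */
#if !defined(__powerpc64__) && defined(__powerpc__)
#define IS_LOCK_FREE_8 0
#else
#define IS_LOCK_FREE_8 __c11_atomic_is_lock_free(8)
#endif

/* Placeholder for the spinlock-protected path (the real atomic.c hashes
 * the pointer to pick a lock); a plain load stands in for it here. */
static void locked_load_8(uint64_t *src, uint64_t *dest) {
  *dest = *src; /* NOT atomic; illustration only */
}

static void load_8(uint64_t *src, uint64_t *dest, int model) {
  if (IS_LOCK_FREE_8) {
    /* Compiler-generated lock-free 8-byte load. */
    *dest = __c11_atomic_load((_Atomic uint64_t *)src, model);
    return;
  }
  /* Not lock free: take the locked fallback. */
  locked_load_8(src, dest);
}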