From 12817793301398241b6cb00c740f0d3ca41076e9 Mon Sep 17 00:00:00 2001
From: Rich Felker <dalias@aerifal.cx>
Date: Fri, 14 Sep 2018 10:47:16 -0400
Subject: fix broken atomic store on powerpc[64]

in our memory model, all atomics are supposed to be full barriers;
stores are not release-only. this is important because store is used
as an unlock operation in places where it needs to acquire the waiter
count to determine if a futex wake is needed. at least in the
malloc-internal locks, but possibly elsewhere, soft deadlocks from
missing futex wake (breakable by poking the threads to restart the
syscall, e.g. by attaching a tracer) were reported to occur.
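
to illustrate (a simplified sketch, not the exact source; lk[0] is the
lock word, lk[1] the waiter count, and __wake is the internal
futex-wake helper), the unlock path in question looks roughly like:

	static void unlock(volatile int *lk)
	{
		/* the store of 0 must be globally visible before the
		 * load of lk[1]; with only a release-ordered store, a
		 * newly arrived waiter's increment of lk[1] can be
		 * missed and the futex wake skipped */
		a_store(&lk[0], 0);
		if (lk[1]) __wake(&lk[0], 1, 1);
	}
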
once the malloc lock is replaced with Jens Gustedt's new lock
implementation (see commit 47d0bcd4762f223364e5b58d5a381aaa0cbd7c38),
malloc will not be affected by the issue, but it's not clear that
other uses won't be. reducing the strength of the ordering properties
required from a_store would require a thorough analysis of how it's
used.

to fix the problem, I'm removing the powerpc[64]-specific a_store
definition; now, the top-level atomic.h will implement a_store using
a_barrier on both sides of the store.
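
for reference, the fallback in the top-level atomic.h is along these
lines (sketch, not quoted verbatim; a_barrier is the arch's full
barrier, a plain sync on powerpc):

	#ifndef a_store
	#define a_store a_store
	static inline void a_store(volatile int *p, int v)
	{
		a_barrier();	/* order earlier accesses before the store */
		*p = v;
		a_barrier();	/* order the store before later loads, e.g.
				   the waiter-count check after an unlock */
	}
	#endif
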
it's not clear to me yet whether there might be issues with the other
atomics. it's possible that a_post_llsc needs to be replaced with a
full barrier to guarantee the formal semantics we want, but either way
I think the difference is unlikely to impact the way we use them.
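
for context, the other atomics are built by the top-level atomic.h
from the arch's ll/sc primitives roughly as follows (sketch), which is
where a_pre_llsc/a_post_llsc enter the picture:

	#ifndef a_cas
	#define a_cas a_cas
	static inline int a_cas(volatile int *p, int t, int s)
	{
		int old;
		a_pre_llsc();		/* barrier before the ll/sc loop */
		do old = a_ll(p);
		while (old==t && !a_sc(p, s));
		a_post_llsc();		/* just isync on powerpc[64] */
		return old;
	}
	#endif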
---
 arch/powerpc/atomic_arch.h   | 8 --------
 arch/powerpc64/atomic_arch.h | 8 --------
 2 files changed, 16 deletions(-)

diff --git a/arch/powerpc/atomic_arch.h b/arch/powerpc/atomic_arch.h
index 5b65cde7..c2673919 100644
--- a/arch/powerpc/atomic_arch.h
+++ b/arch/powerpc/atomic_arch.h
@@ -30,14 +30,6 @@ static inline void a_post_llsc()
 	__asm__ __volatile__ ("isync" : : : "memory");
 }
 
-#define a_store a_store
-static inline void a_store(volatile int *p, int v)
-{
-	a_pre_llsc();
-	*p = v;
-	a_post_llsc();
-}
-
 #define a_clz_32 a_clz_32
 static inline int a_clz_32(uint32_t x)
 {
diff --git a/arch/powerpc64/atomic_arch.h b/arch/powerpc64/atomic_arch.h
index 17cababd..2bed82be 100644
--- a/arch/powerpc64/atomic_arch.h
+++ b/arch/powerpc64/atomic_arch.h
@@ -48,14 +48,6 @@ static inline void a_post_llsc()
 	__asm__ __volatile__ ("isync" : : : "memory");
 }
 
-#define a_store a_store
-static inline void a_store(volatile int *p, int v)
-{
-	a_pre_llsc();
-	*p = v;
-	a_post_llsc();
-}
-
 #define a_crash a_crash
 static inline void a_crash()
 {
--
cgit v1.2.1