A variant of this new lock algorithm has been presented at SAC'16, see
https://hal.inria.fr/hal-01304108. A full version of that paper is
available at https://hal.inria.fr/hal-01236734.
The main motivation is to improve the safety of the basic lock
implementation in musl. This is achieved by squeezing a lock flag and a
congestion count (= threads inside the critical section) into a single
int. Thereby an unlock operation does exactly one memory
transfer (a_fetch_add) and never touches the value again, but still
detects if a waiter has to be woken up.
This fixes a use-after-free bug in pthread_detach that had only been
patched temporarily; this patch therefore also reverts
c1e27367a9b26b9baac0f37a12349fc36567c8b6
This is also the only place where internal knowledge of the lock
algorithm is used.
The main price for the improved safety is slightly larger code.
Under high congestion, the scheduling behavior differs from the
previous algorithm: a successful put-to-sleep may occur out of order
relative to arrival at the critical section.
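As a rough illustration of the encoding (a sketch only, not the exact
code): the lock flag lives in the sign bit and the remaining bits hold
the congestion count, so a single a_fetch_add in the unlock both
releases the lock and reveals whether anybody is left to wake. Here
a_cas, a_fetch_add, __futexwait and __wake stand for the internal
atomic and futex primitives.

    #include <limits.h>

    /* one int: value < 0 means the lock flag is set, and the offset from
     * INT_MIN counts the threads inside or contending */
    static void lock(volatile int *l)
    {
        int cur = a_cas(l, 0, INT_MIN + 1);        /* fast path: free -> held, count 1 */
        if (!cur) return;
        cur = a_fetch_add(l, 1) + 1;               /* register as congestion */
        for (;;) {
            if (cur < 0) {                         /* flag set: somebody holds the lock */
                __futexwait(l, cur, 1);
                cur -= INT_MIN + 1;                /* expected value after their unlock */
            }
            int val = a_cas(l, cur, INT_MIN + cur);   /* try to set the flag ourselves */
            if (val == cur) return;
            cur = val;
        }
    }

    static void unlock(volatile int *l)
    {
        /* exactly one memory transfer: clear the flag, drop our count, and
         * use the returned old value to detect remaining waiters */
        if (a_fetch_add(l, -(INT_MIN + 1)) != INT_MIN + 1)
            __wake(l, 1, 1);
    }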
|
if a multithreaded program became non-multithreaded (i.e. all other
threads exited) while one thread held an internal lock, the remaining
thread would fail to release the lock. if the program then became
multithreaded again at a later time, any further attempts to obtain
the lock would deadlock permanently.
the underlying cause is that the value of libc.threads_minus_1 at
unlock time might not match the value at lock time. one solution would
be returning a flag to the caller indicating whether the lock was
taken and needs to be unlocked, but there is a simpler solution: using
the lock itself as such a flag.
note that this flag is not needed anyway for correctness; if the lock
is not held, the unlock code is harmless. however, the memory
synchronization properties associated with a_store are costly on some
archs, so it's best to avoid executing the unlock code when it is
unnecessary.
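a sketch of the resulting unlock path (illustrative, using the
two-int lock where l[0] is the lock word and l[1] the waiter count):
keying off the lock word itself instead of libc.threads_minus_1 means
the costly a_store and the wake are skipped whenever the lock was
never actually taken.

    static void unlock(volatile int *l)
    {
        /* l[0] is nonzero only if the lock path actually took the lock,
         * so the lock word doubles as the "needs unlocking" flag */
        if (l[0]) {
            a_store(l, 0);                 /* release */
            if (l[1]) __wake(l, 1, 1);     /* wake one waiter if any are counted */
        }
    }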
|
i did some testing trying to switch malloc to use the new internal
lock with priority inheritance, and my malloc contention test got
20-100 times slower. if priority inheritance futexes are this slow,
it's simply too high a price to pay for avoiding priority inversion.
maybe we can consider them somewhere down the road once the kernel
folks get their act together on this (and preferably don't link it to
glibc's inefficient lock API)...
as such, i've switched __lock to use malloc's implementation of
lightweight locks, and updated all the users of the code to use an
array with a waiter count for their locks. this should give optimal
performance in the vast majority of cases, and it's simple.
malloc is still using its own internal copy of the lock code because
it seems to yield measurably better performance with -O3 when it's
inlined (20% or more difference in the contention stress test).
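for illustration, such a lightweight lock can be pictured as a
two-int array, the lock word plus a waiter count; a_swap/a_store and
__wait/__wake stand for the internal atomic and futex helpers, with
__wait assumed to bump the waiter count in l[1] around its futex
sleep.

    /* l[0] = lock word, l[1] = waiter count */
    static void lock(volatile int *l)
    {
        if (libc.threads_minus_1)                  /* nothing to do single-threaded */
            while (a_swap(l, 1))                   /* loop until we swap 0 -> 1 */
                __wait(l, l+1, 1, 1);              /* sleeps with l[1] incremented */
    }

    static void unlock(volatile int *l)
    {
        a_store(l, 0);
        if (l[1]) __wake(l, 1, 1);                 /* one wake, only if waiters counted */
    }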
|
this bug probably would have gone unnoticed, since the affected code
is only used in the fallback path for systems where priority-inheritance locking
fails. unfortunately this approach results in one spurious wake
syscall on the final unlock, when there are no waiters remaining. the
alternative (possibly better) would be to use broadcast wakes instead
of reflagging the waiter unconditionally, and let each waiter reflag
itself; this saves one syscall at the expense of invoking the
"thundering herd" effect (worse performance degredation) when there
are many waiters.
ideally we would be able to update all of our locks to use an array of
two ints rather than a single int, and use a separate counter system
like proper mutexes use; then we could avoid all spurious wake calls
without resorting to broadcasts. however, it's not clear to me that
priority inheritance futexes support this usage. the kernel sets the
waiters flag for them (just like we're doing now) and i can't tell if
it's safe to bypass the kernel when unlocking just because we know
(from private data, the waiter count) that there are no waiters. this
is something that could be explored in the future.
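the spurious wake can be illustrated with a generic three-state futex
lock (not the actual code in question): each waiter unconditionally
reflags the lock as contended before sleeping, so the last waiter
leaves the contended value behind and the final unlock pays for one
wake that nobody receives.

    /* 0 = unlocked, 1 = locked, 2 = locked and possibly contended */
    static void lock(volatile int *l)
    {
        if (!a_cas(l, 0, 1)) return;               /* uncontended fast path */
        while (a_swap(l, 2))                       /* reflag as contended every time */
            __futexwait(l, 2, 1);
    }

    static void unlock(volatile int *l)
    {
        /* value 2 only says a waiter existed at some point, so the final
         * unlock after the last waiter still issues one wake syscall */
        if (a_swap(l, 0) == 2) __wake(l, 1, 1);
    }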
|
we use priority inheritance futexes if possible so that the library
cannot hit internal priority inversion deadlocks in the presence of
realtime priority scheduling (full support to be added later).
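roughly, the priority-inheritance protocol keeps the owner's tid in
the lock word and lets the kernel handle contention; the sketch below
is illustrative only, not the actual locking code (a_cas stands for
the internal compare-and-swap).

    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void pi_lock(volatile int *l, int tid)
    {
        if (a_cas(l, 0, tid) == 0) return;              /* free -> owned by us */
        syscall(SYS_futex, l, FUTEX_LOCK_PI, 0, 0);     /* kernel queues us and
                                                           boosts the owner */
    }

    static void pi_unlock(volatile int *l, int tid)
    {
        if (a_cas(l, tid, 0) == tid) return;            /* no waiters recorded */
        syscall(SYS_futex, l, FUTEX_UNLOCK_PI, 0, 0);   /* kernel hands the lock
                                                           to the top waiter */
    }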
|
with this patch, the syscallN() functions are no longer needed; a
variadic syscall() macro allows syscalls with anywhere from 0 to 6
arguments to be made with a single macro name. also, manually casting
each non-integer argument with (long) is no longer necessary; the
casts are hidden in the macros.
some source files which depended on being able to define the old macro
SYSCALL_RETURNS_ERRNO have been modified to directly use __syscall()
instead of syscall(). references to SYSCALL_SIGSET_SIZE and SYSCALL_LL
have also been changed.
x86_64 has not been tested, and may need a follow-up commit to fix any
minor bugs/oversights.
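the dispatch can be sketched as follows (macro names here are
illustrative, not necessarily the ones in the tree): the argument
count, excluding the syscall number, selects which fixed-arity helper
the call expands to, and a cast macro wraps each argument so pointers
and integers alike are passed as long.

    /* pick the 9th argument: the padding shifts so that n ends up being
     * the number of arguments after the syscall number (0..6) */
    #define __SC_NARGS_X(a,b,c,d,e,f,g,h,n,...) n
    #define __SC_NARGS(...) __SC_NARGS_X(__VA_ARGS__,7,6,5,4,3,2,1,0,)
    #define __SC_CONCAT_X(a,b) a##b
    #define __SC_CONCAT(a,b) __SC_CONCAT_X(a,b)
    #define __SC_DISP(base,...) \
        __SC_CONCAT(base,__SC_NARGS(__VA_ARGS__))(__VA_ARGS__)

    /* __syscall(SYS_read,fd,buf,len) expands to __syscall3(SYS_read,fd,buf,len) */
    #define __syscall(...) __SC_DISP(__syscall,__VA_ARGS__)

    /* the casts are hidden here; __syscall3_raw is a stand-in for the
     * arch-specific 3-argument implementation */
    #define __scc(x) ((long)(x))
    #define __syscall3(n,a,b,c) __syscall3_raw((n),__scc(a),__scc(b),__scc(c))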
|