|
this reverts commit c0ed5a201b2bdb6d1896064bec0020c9973db0a1, which
was based on a mistaken reading of POSIX due to inconsistency between
the description (which requires return upon interruption by a signal)
and the errors list (which wrongly lists EINTR as "may fail").
since the previously-introduced behavior was a workaround for an old
kernel bug to ensure safety of correct programs that were not hardened
against the bug, an effort has been made to preserve it for programs
which do not use interrupting signal handlers. the stage for this was
set in commit a63c0104e496f7ba78b64be3cd299b41e8cd427f, which makes
the futex __timedwait backend suppress EINTR if it's seen when no
interrupting signal handlers have been installed.
based loosely on a patch submitted by Orivej Desh, but with
unnecessary additional changes removed.
|
|
the memory model we use internally for atomics permits plain loads of
values which may be subject to concurrent modification without
requiring that a special load function be used. since a compiler is
free to make transformations that alter the number of loads or the way
in which loads are performed, the compiler is theoretically free to
break this usage. the most obvious concern is with atomic cas
constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be
transformed to a_cas(p,*p,f(*p)); where the latter is intended to show
multiple loads of *p whose resulting values might fail to be equal;
this would break the atomicity of the whole operation. but even more
fundamental breakage is possible.
with the changes being made now, objects that may be modified by
atomics are modeled as volatile, and the atomic operations performed
on them by other threads are modeled as asynchronous stores by
hardware which happens to be acting on the request of another thread.
such modeling of course does not itself address memory synchronization
between cores/cpus, but that aspect was already handled. this all
seems less than ideal, but it's the best we can do without mandating a
C11 compiler and using the C11 model for atomics.
in the case of pthread_once_t, the ABI type of the underlying object
is not volatile-qualified. so we are assuming that accessing the
object through a volatile-qualified lvalue via casts yields volatile
access semantics. the language of the C standard is somewhat unclear
on this matter, but this is an assumption the linux kernel also makes,
and seems to be the correct interpretation of the standard.
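a minimal sketch of the usage pattern in question; a_cas is stood in for by a gcc builtin purely for illustration (in musl it is per-arch inline asm), and the function and macro names are invented for the example:

  /* stand-in for musl's a_cas, for illustration only */
  static int a_cas(volatile int *p, int t, int s)
  {
      return __sync_val_compare_and_swap(p, t, s);
  }

  /* the old value is loaded exactly once, through a volatile-qualified
   * lvalue, then handed to the cas. writing a_cas(p, *p, f(*p)) instead
   * would permit the compiler to load *p twice and obtain two different
   * values, breaking the atomicity of the read-modify-write. */
  static int fetch_and_add_one(volatile int *p)
  {
      int old;
      do old = *p;
      while (a_cas(p, old, old+1) != old);
      return old;
  }

  /* for pthread_once_t, whose ABI type is not volatile-qualified, the
   * access is performed through a cast to a volatile-qualified lvalue: */
  #define VOLATILE_ACCESS(x) (*(volatile int *)&(x))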
|
|
previously, the __timedwait function was optionally a cancellation
point depending on whether it was passed a pointer to a cleanup
function and context to register. as of now, only one caller actually
used such a cleanup function (and it may face removal soon); most
callers either passed a null pointer to disable cancellation or passed
a dummy cleanup function.
now, __timedwait is never a cancellation point, and __timedwait_cp is
the cancellable version. this makes the intent of the calling code
more obvious and avoids ugly dummy functions and long argument lists.
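roughly, the split looks like this (prototypes paraphrased from musl's internal headers; the exact argument lists, and the shape of the old interface, are assumptions of this sketch, not verbatim copies):

  #include <time.h>

  /* never a cancellation point */
  int __timedwait(volatile int *addr, int val, clockid_t clk,
                  const struct timespec *at, int priv);

  /* cancellation point, used where POSIX requires one */
  int __timedwait_cp(volatile int *addr, int val, clockid_t clk,
                     const struct timespec *at, int priv);

  /* previously, the same distinction was expressed by passing either a
   * null pointer or a dummy cleanup function (plus its context) in two
   * extra arguments to a single __timedwait entry point. */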
|
|
per POSIX, the EINTR condition is an optional error for these
functions, not a mandatory one. since old kernels (pre-2.6.22) failed
to honor SA_RESTART for the futex syscall, it's dangerous to trust
EINTR from the kernel. thankfully POSIX offers an easy way out.
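the same retry-on-EINTR idea, expressed as a wrapper purely for illustration (the change applies it inside the implementation itself, at the underlying futex wait):

  #include <errno.h>
  #include <semaphore.h>
  #include <time.h>

  /* restart the wait when interrupted by a signal, which POSIX permits
   * because EINTR is only a "may fail" error for these functions */
  static int timedwait_restarting(sem_t *sem, const struct timespec *abs)
  {
      int r;
      do r = sem_timedwait(sem, abs);
      while (r == -1 && errno == EINTR);
      return r;
  }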
|
|
per POSIX these functions are both cancellation points, so they must
act on any cancellation request which is pending prior to the call.
previously, only the code path where actual waiting took place could
act on cancellation.
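a minimal sketch of the requirement (not musl's internals, which test the thread's cancellation flag directly rather than calling pthread_testcancel):

  #include <pthread.h>
  #include <semaphore.h>

  /* illustrative wrapper: a cancellation point must act on a request
   * that was already pending before the call, even when the semaphore
   * can be acquired immediately and no waiting takes place. */
  static int wait_as_cancellation_point(sem_t *sem)
  {
      pthread_testcancel();          /* act on any pending request first */
      if (sem_trywait(sem) == 0)     /* fast path: no actual waiting */
          return 0;
      return sem_wait(sem);          /* slow path is itself cancellable */
  }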
|
|
if there is already a waiter for a lock, spinning on the lock is
essentially an attempt to steal it from whichever waiter would obtain
it via any priority rules in place, and is therefore undesirable. in
the current implementation, there is always an inherent race window at
unlock during which a newly-arriving thread may steal the lock from
the existing waiters, but we should aim to keep this window minimal
rather than enlarging it.
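a minimal sketch of the policy, assuming gcc-builtin stand-ins for the atomics and a thin futex wrapper (the names here are invented for the example, not musl's):

  #define _GNU_SOURCE
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static void futex_wait(volatile int *addr, int val)
  {
      syscall(SYS_futex, addr, FUTEX_WAIT, val, 0, 0, 0);
  }

  /* spin only while nobody is already queued in the kernel; once a
   * waiter exists, spinning would just race to steal the lock from it,
   * so go straight to sleeping. the matching unlock (not shown) clears
   * *l and issues a futex wake when *waiters is nonzero. */
  static void lock(volatile int *l, volatile int *waiters)
  {
      for (int i = 0; i < 100 && !*waiters; i++)
          if (__sync_val_compare_and_swap(l, 0, 1) == 0)
              return;                          /* acquired while spinning */
      __sync_fetch_and_add(waiters, 1);
      while (__sync_val_compare_and_swap(l, 0, 1) != 0)
          futex_wait(l, 1);
      __sync_fetch_and_sub(waiters, 1);
  }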
|
|
empirically, this increases the maximum rate of wait/post operations
between two threads by 20-150 times on machines I tested, including
x86 and arm. conceptually, it makes sense to do some spinning because
semaphores are intended to be usable as a notification mechanism
between threads, not just as locks, and low-latency notification is a
valuable property to have.
|
|
private-futex uses the virtual address of the futex int directly as
the hash key rather than requiring the kernel to resolve the address
to an underlying backing for the mapping in which it lies. for certain
usage patterns it improves performance significantly.
in many places, the code using futex __wake and __wait operations was
already passing a correct fixed zero or nonzero flag for the priv
argument, so no change was needed at the site of the call, only in the
__wake and __wait functions themselves. in other places, especially
where the process-shared attribute for a synchronization object was
not previously tracked, additional new code is needed. for mutexes,
the only place to store the flag is in the type field, so additional
bit masking logic is needed for accessing the type.
for non-process-shared condition variable broadcasts, the futex
requeue operation is unable to requeue from a private futex to a
process-shared one in the mutex structure, so requeue is simply
disabled in this case by waking all waiters.
for robust mutexes, the kernel always performs a non-private wake when
the owner dies. in order not to introduce a behavioral regression in
non-process-shared robust mutexes (when the owning thread dies), they
are simply forced to be treated as process-shared for now, giving
correct behavior at the expense of performance. this can be fixed by
adding explicit code to pthread_exit to do the right thing for
non-shared robust mutexes in userspace rather than relying on the
kernel to do it, and will be fixed in this way later.
since not all supported kernels have private futex support, the new
code detects EINVAL from the futex syscall and falls back to making
the call without the private flag. no attempt to cache the result is
made; caching it and using the cached value efficiently is somewhat
difficult, and not worth the complexity when the benefits would be
seen only on ancient kernels which have numerous other limitations and
bugs anyway.
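a minimal sketch of the fallback described above (not musl's actual __wait/__wake code, which goes through its internal syscall wrappers):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* try the private op first when the caller says the object is not
   * process-shared; a kernel too old to know the flag fails with
   * EINVAL, and the plain (shared) op is used instead. */
  static void futex_wait_priv(volatile int *addr, int val, int priv)
  {
      if (priv) {
          if (syscall(SYS_futex, addr, FUTEX_WAIT | FUTEX_PRIVATE_FLAG,
                      val, 0, 0, 0) == 0 || errno != EINVAL)
              return;
          /* EINVAL: no private futex support, fall through to shared op */
      }
      syscall(SYS_futex, addr, FUTEX_WAIT, val, 0, 0, 0);
  }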
|
|
to deal with the fact that the public headers may be used with pre-c99
compilers, __restrict is used in place of restrict, and defined
appropriately for any supported compiler. we also avoid the form
[restrict] since older versions of gcc rejected it due to a bug in the
original c99 standard, and instead use the form *restrict.
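roughly the scheme in question (a paraphrase, not a verbatim copy of musl's features.h): __restrict maps to the real keyword on c99 compilers, is left to the compiler's extension on gcc, and is defined away otherwise; declarations then use the *restrict form:

  #include <stddef.h>

  #if __STDC_VERSION__ >= 199901L
  #define __restrict restrict   /* real keyword available */
  #elif !defined(__GNUC__)
  #define __restrict            /* no keyword, no extension: define it away */
  #endif                        /* gcc pre-c99: __restrict is a built-in extension */

  /* the *restrict form is used in declarations; [restrict] is avoided
   * because old gcc rejected it due to a bug in the original c99 text */
  void *memcpy(void *__restrict dest, const void *__restrict src, size_t n);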
|
|
this decrement used to be performed by the cancellation handler, which
was called when it was popped.
|
|
new features:
- FUTEX_WAIT_BITSET op will be used for timed waits if available. this
saves a call to clock_gettime (see the sketch after this list).
- error checking for the timespec struct is now inside __timedwait so
it doesn't need to be duplicated everywhere. cond_timedwait still
needs to duplicate it to avoid unlocking the mutex, though.
- pushing and popping the cancellation handler is delegated to
__timedwait, and cancellable/non-cancellable waits are unified.
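a minimal sketch of the FUTEX_WAIT_BITSET usage from the first item (not musl's __timedwait; the wrapper name is invented, and the sketch pins the wait to CLOCK_REALTIME for simplicity):

  #define _GNU_SOURCE
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <time.h>
  #include <unistd.h>

  /* FUTEX_WAIT_BITSET accepts an absolute timeout directly, so the
   * deadline does not have to be converted to a relative timeout with a
   * clock_gettime call, as plain FUTEX_WAIT would require. */
  static long abs_timed_wait(volatile int *addr, int val,
                             const struct timespec *deadline)
  {
      return syscall(SYS_futex, addr,
                     FUTEX_WAIT_BITSET | FUTEX_CLOCK_REALTIME,
                     val, deadline, 0, FUTEX_BITSET_MATCH_ANY);
  }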
|
|
the race condition these changes address is described in glibc bug
report number 12674:
http://sourceware.org/bugzilla/show_bug.cgi?id=12674
up until now, musl has shared the bug, and i had not been able to
figure out how to eliminate it. in short, the problem is that it's not
valid for sem_post to inspect the waiters count after incrementing the
semaphore value, because another thread may have already successfully
returned from sem_wait, (rightly) deemed itself the only remaining
user of the semaphore, and chosen to destroy and free it (or unmap the
shared memory it's stored in). POSIX is not explicit in blessing this
usage, but it gives a very explicit analogous example with mutexes
(which, in musl and glibc, also suffer from the same race condition
bug) in the rationale for pthread_mutex_destroy.
the new semaphore implementation augments the waiter count with a
redundant waiter indication in the semaphore value itself,
representing the presence of "last minute" waiters that may have
arrived after sem_post read the waiter count. this allows sem_post to
read the waiter count prior to incrementing the semaphore value,
rather than after incrementing it, so as to avoid accessing the
semaphore memory whatsoever after the increment takes place.
a similar, but much simpler, fix should be possible for mutexes and
other locking primitives whose usage rules are stricter than those of
semaphores.
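a minimal sketch of the ordering that fixes the race; the two-word layout, the stand-in primitives, and the exact encoding of the redundant waiter indication are assumptions of the sketch, not musl's sem_t:

  #define _GNU_SOURCE
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* stand-ins for musl's internal primitives, for illustration only */
  static int a_cas(volatile int *p, int t, int s)
  {
      return __sync_val_compare_and_swap(p, t, s);
  }
  static void futex_wake(volatile int *p, int n)
  {
      syscall(SYS_futex, p, FUTEX_WAKE, n, 0, 0, 0);
  }

  struct sem { volatile int val, waiters; };   /* illustrative layout */

  static int post(struct sem *s)
  {
      int val, waiters;
      do {
          val = s->val;
          waiters = s->waiters;  /* read the waiter count BEFORE the increment */
      } while (a_cas(&s->val, val, val + 1 + (val < 0)) != val);
      /* a negative value encodes "last minute" waiters that arrived after
       * the waiter count was read; post() performs no further loads from
       * *s after the increment, so a waiter that returns and destroys or
       * unmaps the semaphore is not raced with. */
      if (val < 0 || waiters)
          futex_wake(&s->val, 1);
      return 0;
  }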
|
|
this patch improves the correctness, simplicity, and size of
cancellation-related code. modulo any small errors, it should now be
completely conformant, safe, and resource-leak free.
the notion of entering and exiting cancellation-point context has been
completely eliminated and replaced with alternative syscall assembly
code for cancellable syscalls. the assembly is responsible for setting
up execution context information (stack pointer and address of the
syscall instruction) which the cancellation signal handler can use to
determine whether the interrupted code was in a cancellable state.
these changes eliminate race conditions in the previous generation of
cancellation handling code (whereby a cancellation request received
just prior to the syscall would not be processed, leaving the syscall
to block, potentially indefinitely), and remedy an issue where
non-cancellable syscalls made from signal handlers became cancellable
if the signal handler interrupted a cancellation point.
x86_64 asm is untested and may need a second try to get it right.
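a minimal sketch of the handler-side check (assuming x86_64 and the gregs[REG_RIP] ucontext accessor purely for illustration; in musl the region boundaries and the register access come from per-arch asm and headers):

  #define _GNU_SOURCE
  #include <signal.h>
  #include <stdint.h>
  #include <ucontext.h>

  /* filled in by per-arch asm in the real implementation */
  static uintptr_t cancellable_start, cancellable_end;
  static volatile sig_atomic_t cancel_pending;

  static void cancel_handler(int sig, siginfo_t *si, void *ctx)
  {
      uintptr_t pc = (uintptr_t)((ucontext_t *)ctx)->uc_mcontext.gregs[REG_RIP];
      if (pc >= cancellable_start && pc < cancellable_end) {
          /* interrupted inside the cancellable syscall window: the real
           * handler rewrites the saved context so the thread resumes in
           * the cancellation unwind path instead of entering (or
           * restarting) the blocking syscall, closing the race where a
           * request arriving just before the syscall would be lost */
      } else {
          /* otherwise just record the request; it is acted on at the
           * next cancellation point */
          cancel_pending = 1;
      }
      (void)sig; (void)si;
  }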
|
|
1. make sem_[timed]wait interruptible by signals, per POSIX
2. keep a waiter count in order to avoid unnecessary futex wake syscalls
|
|
this commit addresses two issues:
1. a race condition, whereby a cancellation request occurring after a
syscall returned from kernelspace but before the subsequent
CANCELPT_END would cause cancellable resource-allocating syscalls
(like open) to leak resources.
2. signal handlers invoked while the thread was blocked at a
cancellation point behaved as if asynchronous cancellation mode were in
effect, resulting in potentially dangerous state corruption if a
cancellation request occurred.
the glibc/nptl implementation of threads shares both of these issues.
with this commit, both are fixed. however, cancellation points
encountered in a signal handler will not be acted upon if the signal
was received while the thread was already at a cancellation point.
they will of course be acted upon after the signal handler returns, so
in real-world usage where signal handlers quickly return, it should
not be a problem. it's possible to solve this problem too by having
sigaction() wrap all signal handlers with a function that uses a
pthread_cleanup handler to catch cancellation, patch up the saved
context, and return into the cancellable function that will catch and
act upon the cancellation. however, that would be a lot of complexity
for minimal, if any, benefit...
|