on some archs, linux support for futex operations (including
robust_list processing) that depend on kernelspace CAS is conditional
on a runtime check. as of linux 4.18, this check fails unconditionally
on nommu archs that perform it, and spurious failure on powerpc64 was
observed but not explained. it's also possible that futex support is
omitted entirely, or that the kernel is older than 2.6.17. for most
futex ops, ENOSYS does not yield hard breakage; userspace will just
spin at 100% cpu load. but for robust mutexes, correct behavior
depends on the kernel functionality.
use the get_robust_list syscall to probe for support at the first call
to pthread_mutexattr_setrobust, and block creation of robust mutexes
with a reportable error if they can't be supported.
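
as a rough sketch, the probe might look something like the following;
the function name and caching scheme here are illustrative rather than
musl's actual code, and the race on the cached result is benign since
the probe is idempotent:

    #include <errno.h>
    #include <stddef.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* -1 = not yet probed; otherwise 0 (supported) or an errno value */
    static volatile int robust_support = -1;

    /* returns 0 if the kernel can process robust lists, else an errno
     * value for pthread_mutexattr_setrobust to report to the caller */
    static int check_robust_support(void)
    {
        int r = robust_support;
        if (r < 0) {
            void *head;
            size_t len;
            /* probing the calling thread (pid 0) has no side effects;
             * failure (e.g. ENOSYS) means robust_list processing is
             * unavailable and robust mutexes cannot work */
            r = syscall(SYS_get_robust_list, 0, &head, &len) ? errno : 0;
            robust_support = r;
        }
        return r;
    }
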
the robust list head lies in the thread structure, which is unmapped
before exit for detached threads. this leaves the kernel unable to
process the exiting thread's robust list, and with a dangling pointer
which may happen to point to new unrelated data at the time the kernel
processes it.
userspace processing of the robust list was already needed for
non-pshared robust mutexes, in order to perform private futex wakes
rather than the shared ones the kernel would do. however, that
processing was conditional on linking pthread_mutexattr_setrobust, and
it did not bother with the pshared mutexes in the list; handling them
requires additional logic for the robust list pending slot in case
pthread_exit is interrupted by asynchronous process termination.
the new robust list processing code is linked unconditionally (inlined
in pthread_exit), handles both private and shared mutexes, and also
removes the kernel's reference to the robust list before unmapping and
exit if the exiting thread is detached.
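
the tail of pthread_exit for a detached thread can be sketched roughly
as follows; robust_list_walk is a hypothetical stand-in for the
userspace processing described above:

    #include <stddef.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* hypothetical: unlocks both private and pshared entries, keeping
     * the pending slot up to date so the kernel can still finish a
     * pshared unlock if the process is killed mid-walk */
    void robust_list_walk(void);

    void detached_exit_tail(void)
    {
        robust_list_walk();

        /* drop the kernel's reference before the thread's stack and
         * descriptor are unmapped: a null head, with the mandatory
         * 3*sizeof(long) length, leaves the kernel nothing to walk */
        syscall(SYS_set_robust_list, 0, 3*sizeof(long));

        /* ... unmap own stack/descriptor and make the exit syscall ... */
    }
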
when manipulating the robust list, the order of stores matters,
because the code may be asynchronously interrupted by a fatal signal
and the kernel will then access the robust list in what is essentially
an async-signal context.
previously, aliasing considerations made it seem unlikely that a
compiler could reorder the stores, but proving that no harmful
reordering was possible would have been extremely difficult. instead,
I've opted to make all the pointers used as part of the robust list,
including those in the robust list head and in the individual mutexes,
volatile.
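
concretely, the declarations end up shaped roughly like this (struct
and field names are stand-ins for musl's internal layouts):

    struct tcb {
        struct {
            volatile void *volatile head;    /* first link, or &head when empty */
            long off;                        /* offset from a link to its futex word */
            volatile void *volatile pending; /* link currently being added/removed */
        } robust_list;
    };

    struct mutex {
        volatile int lock;                   /* futex word holding the owner's tid */
        volatile void *volatile prev, *volatile next;
    };

since stores to volatile objects may not be reordered or elided by the
compiler relative to one another, the kernel sees the list links change
in exactly the order the code writes them.
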
in addition, the format of the robust list has been changed to point
back to the head at the end, rather than ending with a null pointer.
this is to match the documented kernel robust list ABI. the null
pointer, which was previously used, only worked because faults during
access terminate the robust list processing.
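
reusing the stand-in types above, enqueuing a freshly locked mutex
might look like this; per the kernel ABI, the pending slot is written
before the locking CAS and cleared only once the links are consistent,
and an empty list is represented by head pointing back at itself:

    #include <stddef.h>

    static void robust_enqueue(struct tcb *self, struct mutex *m)
    {
        /* self->robust_list.pending was set to &m->next before the
         * atomic acquisition of m->lock that precedes this call */
        void *next = (void *)self->robust_list.head;
        m->next = next;
        m->prev = &self->robust_list.head;
        if (next != (void *)&self->robust_list.head) {
            /* patch the old first element's prev link */
            struct mutex *n = (struct mutex *)
                ((char *)next - offsetof(struct mutex, next));
            n->prev = &m->next;
        }
        self->robust_list.head = &m->next;   /* list is now consistent */
        self->robust_list.pending = 0;       /* safe to forget the entry */
    }
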
when a thread or process exits, the kernel always uses non-private
wakes while walking the robust list, so it cannot wake waiters that
are waiting with the private futex flag. this problem is solved by
doing the equivalent in userspace as the last step of pthread_exit.
care is taken to remove mutexes from the robust list before unlocking
them so that the kernel will not attempt to access them again,
possibly after another thread locks them. this removal code can treat
the list as singly-linked, since no further code which would add or
remove items is able to run at this point. moreover, the pending
pointer is not needed since the mutexes being unlocked are all
process-local; in the case of asynchronous process termination, they
all cease to exist.
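
a sketch of that final walk, again with the stand-in types from above
(the gcc atomic builtin stands in for musl's internal atomic swap, and
the wake is the private-futex equivalent of what the kernel would do):

    #include <linux/futex.h>
    #include <stddef.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define OWNER_DIED 0x40000000   /* the kernel's FUTEX_OWNER_DIED bit */

    static void process_robust_list(struct tcb *self)
    {
        void *p;
        while ((p = (void *)self->robust_list.head)
               && p != (void *)&self->robust_list.head) {
            struct mutex *m = (struct mutex *)
                ((char *)p - offsetof(struct mutex, next));
            /* unlink first, treating the list as singly-linked, so the
             * kernel can never revisit this entry */
            self->robust_list.head = m->next;
            /* leave OWNER_DIED in the futex word so the next locker
             * gets EOWNERDEAD; the old value's sign bit is a waiters
             * flag, mirroring the kernel's FUTEX_WAITERS convention */
            int old = __atomic_exchange_n(&m->lock, OWNER_DIED,
                                          __ATOMIC_SEQ_CST);
            if (old < 0)
                syscall(SYS_futex, &m->lock,
                        FUTEX_WAKE | FUTEX_PRIVATE_FLAG, 1);
        }
    }
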
since a process-local robust mutex cannot come into existence without
a call to pthread_mutexattr_setrobust in the same process, the code
for userspace robust list processing is put in that source file, and
a weak alias to a dummy function is used to avoid pulling in this
bloat as part of pthread_exit in static-linked programs.
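
the linking trick is the usual weak-alias pattern; the symbol name
below is hypothetical:

    /* musl-style weak alias macro (gcc attribute syntax) */
    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    /* in pthread_exit.c, which is always linked; pthread_exit calls
     * __robust_cleanup(), which by default hits this no-op */
    static void dummy(void) {}
    weak_alias(dummy, __robust_cleanup);

    /* in pthread_mutexattr_setrobust.c, pulled in only if the program
     * can create robust mutexes; this strong definition overrides the
     * weak alias at link time */
    void __robust_cleanup(void)
    {
        /* real userspace robust list processing, as sketched above */
    }
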
this change is to get the right tags for C++ ABI matching. it should
have no other effects.

some of this code should be cleaned up, e.g. using macros for some of
the bit flags, masks, etc. nonetheless, the code is believed to be
working and correct at this point.