|
calls to __aeabi_read_tp may be generated by the compiler to access
TLS on pre-v6 targets. previously, this function was hard-coded to
call the kuser helper, which would crash on kernels with the kuser
helper removed.
to fix the problem most efficiently, the definition of __aeabi_read_tp
is moved so that it's an alias for the new __a_gettp. however, on v7+
targets, code to initialize the runtime choice of thread-pointer
loading code is not even compiled, meaning that defining
__aeabi_read_tp would have caused an immediate crash due to using the
default implementation of __a_gettp with an HCF instruction.
fortunately there is an elegant solution which reduces overall code
size: putting the native thread-pointer loading instruction in the
default code path for __a_gettp, so that separate default/native code
paths are not needed. this function should never be called before
__set_thread_area anyway, and if it is called early on pre-v6
hardware, the old behavior (crashing) is maintained.
ideally __aeabi_read_tp would not be called at all on v7+ targets
anyway -- in fact, prior to the overhaul, the same problem existed,
but it was never caught by users building for v7+ with kuser disabled.
however, it's possible for calls to __aeabi_read_tp to end up in a v7+
binary if some of the object files were built for pre-v7 targets, e.g.
in the case of static libraries that were built separately, so this
case needs to be handled.
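as a sketch of the arrangement (musl's actual implementation is in
assembly; this C-with-inline-asm form and the names are illustrative
only), the default code path now performs the native read, and the
pre-v6 init code can repoint it at the kuser helper:

    /* default path doubles as the native path: read the v6k+
     * user-read-only thread ID register (TPIDRURO) */
    static void *a_gettp_default(void)
    {
        void *tp;
        __asm__ ("mrc p15,0,%0,c13,c0,3" : "=r"(tp));
        return tp;
    }
    /* __aeabi_read_tp is then an alias for the dispatching
     * __a_gettp rather than a hard-coded kuser helper call */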
|
|
previously, builds for pre-armv6 targets hard-coded use of the "kuser
helper" system for atomics and thread-pointer access, resulting in
binaries that fail to run (crash) on systems where this functionality
has been disabled (as a security/hardening measure) in the kernel.
additionally, builds for armv6 hard-coded an outdated/deprecated
memory barrier instruction which may require emulation (extremely
slow) on future models.
this overhaul changes the behavior for all pre-armv7 builds (both of
the above cases) to perform runtime detection of the appropriate
mechanisms for barrier, atomic compare-and-swap, and thread pointer
access. detection is based on information provided by the kernel in
auxv: presence of the HWCAP_TLS bit for AT_HWCAP and the architecture
version encoded in AT_PLATFORM. direct use of the instructions is
preferred when possible, since probing for the existence of the kuser
helper page would be difficult and would incur runtime cost.
for builds targeting armv7 or later, the runtime detection code is not
compiled at all, and much more efficient versions of the non-cas
atomic operations are provided by using ldrex/strex directly rather
than wrapping cas.
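the detection is along these lines (a sketch using the public
getauxval interface for clarity; the real code inspects the auxv it
already has, and HWCAP_TLS is defined by hand here):

    #include <sys/auxv.h>

    #define HWCAP_TLS (1UL<<15) /* arm hwcap bit for the TLS register */

    static int have_native_mechanisms(void)
    {
        /* AT_PLATFORM is a string such as "v6l" or "v7l" on arm */
        const char *plat = (const char *)getauxval(AT_PLATFORM);
        return (getauxval(AT_HWCAP) & HWCAP_TLS)
            && plat && plat[0]=='v' && plat[1]>='6';
    }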
|
|
this change is a workaround for the inability of current compilers to
perform "shrink wrapping" optimizations. in casual testing, it roughly
doubled the performance of pthread_once when called on an
already-finished once control object.
|
|
these functions need to be fast when the init routine has already run,
since they may be called very often from code which depends on global
initialization having taken place. as such, a fast path bypassing
atomic cas on the once control object was used to avoid heavy memory
contention. however, on archs with weakly ordered memory, the fast
path failed to ensure that the caller actually observes the side
effects of the init routine.
preliminary performance testing showed that simply removing the fast
path was not practical; a performance drop of roughly 85x was observed
with 20 threads hammering the same once control on a 24-core machine.
so the new explicit barrier operation from atomic.h is used to retain
the fast path while ensuring memory visibility.
performance may be reduced on some archs where the barrier actually
makes a difference, but the previous behavior was unsafe and incorrect
on these archs. future improvements to the implementation of a_barrier
should reduce the impact.
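the resulting shape of the fast path, in sketch form (a_barrier and
__pthread_once_full are musl-internal names; the out-of-line slow
path is the shrink-wrapping workaround from the commit above):

    int pthread_once(pthread_once_t *control, void (*init)(void))
    {
        /* fast path: no atomic cas, but a barrier so callers
         * observe the init routine's side effects */
        if (*(volatile int *)control == 2) {
            a_barrier();
            return 0;
        }
        return __pthread_once_full(control, init);
    }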
|
|
based on patch by Jens Gustedt.
the main difficulty here is handling the difference between start
function signatures and thread return types for C11 threads versus
POSIX threads. pointers to void are assumed to be able to represent
faithfully all values of int. the function pointer for the thread
start function is cast to an incorrect type for passing through
pthread_create, but is cast back to its correct type before calling so
that the behavior of the call is well-defined.
changes to the existing threads implementation were kept minimal to
reduce the risk of regressions, and duplication of code that carries
implementation-specific assumptions was avoided for ease and safety of
future maintenance.
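the casts in question look roughly like this (a sketch; the transport
of the pointer through pthread_create's start-argument machinery is
elided):

    #include <threads.h>
    #include <stdint.h>

    typedef void *(*posix_start_t)(void *);

    /* passed through pthread_create under the (incorrect) POSIX type */
    static posix_start_t smuggle(thrd_start_t f)
    {
        return (posix_start_t)f;
    }

    /* cast back to the correct type before calling, so the call
     * itself is well-defined; the int result is returned through
     * void *, which is assumed to represent all int values */
    static void *start_c11(posix_start_t stored, void *arg)
    {
        thrd_start_t f = (thrd_start_t)stored;
        return (void *)(intptr_t)f(arg);
    }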
|
|
Because of the clear separation for private pthread_cond_t, these
interfaces are quite simple and direct.
|
|
These all have POSIX equivalents, but aside from tss_get, they all
have minor changes to the signature or return value and thus need to
exist as separate functions.
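As an example of the kind of difference involved, tss_create maps a
POSIX errno-style result onto the C11 result codes (a sketch assuming
tss_t shares pthread_key_t's representation, as it does in musl):

    #include <threads.h>
    #include <pthread.h>

    int tss_create(tss_t *key, tss_dtor_t dtor)
    {
        /* pthread_key_create returns an errno value; C11 wants
         * thrd_success or thrd_error */
        return pthread_key_create((pthread_key_t *)key, dtor)
            ? thrd_error : thrd_success;
    }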
|
|
The intent of this is to avoid name space pollution from the C threads
implementation.
This has two sides to it. First, we have to provide symbols that don't
pollute the name space for the C threads implementation. Second, we
have to clean up some internal uses of POSIX functions so that they
don't implicitly drag in such symbols.
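The usual pattern for the first half is to implement each function
under a reserved name and expose the POSIX name as a weak alias, so
internal callers reference only the non-polluting symbol. A sketch
(musl has a weak_alias macro of this form internally):

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    int __pthread_detach(pthread_t t)
    {
        /* ... implementation ... */
        return 0;
    }
    weak_alias(__pthread_detach, pthread_detach);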
|
|
per POSIX these functions are both cancellation points, so they must
act on any cancellation request which is pending prior to the call.
previously, only the code path where actual waiting took place could
act on cancellation.
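in sketch form, the fix is to test for pending cancellation before
the fast path rather than only inside the waiting path:

    /* act on already-pending cancellation up front */
    pthread_testcancel();
    if (!sem_trywait(sem)) return 0;
    /* ... otherwise fall through to the cancellable wait ... */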
|
|
if there is already a waiter for a lock, spinning on the lock is
essentially an attempt to steal it from whichever waiter would obtain
it via any priority rules in place, and is therefore undesirable. in
the current implementation, there is always an inherent race window at
unlock during which a newly-arriving thread may steal the lock from
the existing waiters, but we should aim to keep this window minimal
rather than enlarging it.
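in code, the guard is a single condition on the spin loop; a sketch
with musl-style names (a_spin and the field names are illustrative):

    /* spin only while the lock is contended but nobody is queued */
    while (spins-- && m->_m_lock && !m->_m_waiters) a_spin();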
|
|
empirically, this increases the maximum rate of wait/post operations
between two threads by 20-150 times on machines I tested, including
x86 and arm. conceptually, it makes sense to do some spinning because
semaphores are intended to be usable as a notification mechanism
between threads, not just as locks, and low-latency notification is a
valuable property to have.
|
|
the previous spin limit of 10000 was utterly unreasonable.
empirically, it could consume up to 200000 cycles, whereas a failed
futex wait (EAGAIN) typically takes 1000 cycles or less, and even a
true wait/wake round seems much less expensive.
the new counts (100 for general wait, 200 in barrier) were simply
chosen to be in the range of what's reasonable without having adverse
effects on casual micro-benchmark tests I have been running. they may
still be too high, from a standpoint of not wasting cpu cycles, but at
least they're a lot better than before. rigorous testing across
different archs and cpu models should be performed at some point to
determine whether further adjustments should be made.
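the overall shape of the internal wait primitive, as a sketch (a_spin,
a_inc/a_dec, and __syscall are musl-internal names; futex constants
come from the linux headers):

    static void wait_sketch(volatile int *addr, volatile int *waiters, int val)
    {
        int spins = 100; /* 200 is used in the barrier code */
        while (spins-- && (!waiters || !*waiters)) {
            if (*addr == val) a_spin();
            else return;
        }
        if (waiters) a_inc(waiters);
        while (*addr == val)
            __syscall(SYS_futex, addr, FUTEX_WAIT, val, 0);
        if (waiters) a_dec(waiters);
    }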
|
|
this is analogous to commit fffc5cda10e0c5c910b40f7be0d4fa4e15bb3f48,
which fixed the corresponding issue for mutexes.
the robust list can't be used here because the locks do not share a
common layout with mutexes. at some point it may make sense to simply
incorporate a mutex object into the FILE structure and use it, but
that would be a much more invasive change, and it doesn't mesh well
with the current design that uses a simpler code path for internal
locking and pulls in the recursive-mutex-like code when the flockfile
API is used explicitly.
|
|
for unknown syscall commands, the kernel produces ENOSYS, not EINVAL.
|
|
the subsequent code in pthread_create and the code which copies TLS
initialization images to the new thread's TLS space assume that the
memory provided to them is zero-initialized, which is true when it's
obtained by pthread_create using mmap. however, when the caller
provides a stack using pthread_attr_setstack, pthread_create cannot
make any assumptions about the contents. simply zero-filling the
relevant memory in this case is the simplest and safest fix.
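the fix, in sketch form (names illustrative):

    /* mmap'd stacks arrive zeroed via MAP_ANONYMOUS; stacks from
     * pthread_attr_setstack carry arbitrary prior contents */
    if (attr->_a_stackaddr)
        memset(tls_and_tcb_base, 0, tls_and_tcb_size);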
|
|
the main idea of the changes made is to have waiters wait directly on
the "barrier" lock that was used to prevent them from making forward
progress too early rather than first waiting on the atomic state value
and then attempting to lock the barrier.
in addition, adjustments to the mutex waiter count are optimized.
previously, each waking waiter decremented the count (unless it was
the first) then immediately incremented it again for the next waiter
(unless it was the last). this was a roundabout way of achieving the
equivalent of incrementing it once for the first waiter and
decrementing it once for the last.
|
|
previously, wake order could be unpredictable: if a waiter happened to
leave its futex wait on the state early, e.g. due to EAGAIN while
restarting after a signal handler, it could acquire the mutex out of
turn. handling this required ugly O(n) list walking in the unwait
function and accounting to remove waiters that already woke from the
list.
with the new changes, the "barrier" locks in each waiter node are only
unlocked in turn. in addition to simplifying the code, this seems to
improve performance slightly, probably by reducing the number of
accesses threads make to each other's stacks.
as an additional benefit, unrecoverable mutex re-locking errors
(mainly ENOTRECOVERABLE for robust mutexes) no longer need to be
handled with deadlock; they can be reported to the caller, since the
unlocking sequence makes it unnecessary to rely on the mutex to
synchronize access to the waiter list.
|
|
the immediate issue that was reported by Jens Gustedt and needed to be
fixed was corruption of the cv/mutex waiter states when switching to
using a new mutex with the cv after all waiters were unblocked but
before they finished returning from the wait function.
self-synchronized destruction was also handled poorly and may have had
race conditions. and the use of sequence numbers for waking waiters
admitted a theoretical missed-wakeup if the sequence number wrapped
through the full 32-bit space.
the new implementation is largely documented in the comments in the
source. the basic principle is to use linked lists initially attached
to the cv object, but detachable on signal/broadcast, made up of nodes
residing in automatic storage (stack) on the threads that are waiting.
this eliminates the need for waiters to access the cv object after
they are signaled, and allows us to limit wakeup to one waiter at a
time during broadcasts even when futex requeue cannot be used.
performance is also greatly improved, roughly doubling in some tests.
basically nothing is changed in the process-shared cond var case,
where this implementation does not work, since processes do not have
access to one another's local storage.
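the waiter nodes described have roughly this shape (field names per
the source at the time; treat this as an illustration):

    struct waiter {
        struct waiter *prev, *next;  /* list hung off the cv, detachable */
        volatile int state, barrier; /* woken in turn via the barrier lock */
        volatile int *notify;
    };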
|
|
when the kernel is responsible for waking waiters on a robust mutex
whose owner died, it does not have a waiters count available and must
rely entirely on the waiter bit of the lock value.
normally, this bit is only set by newly arriving waiters, so it will
be clear if no new waiters arrived after the current owner obtained
the lock, even if there are other waiters present. leaving it clear is
desirable because it allows timed-lock operations to remove themselves
as waiters and avoid causing unnecessary futex wake syscalls. however,
for process-shared robust mutexes, we need to set the bit whenever
there are existing waiters so that the kernel will know to wake them.
for non-process-shared robust mutexes, the wake happens in userspace
and can look at the waiters count, so the bit does not need to be set
in the non-process-shared case.
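in sketch form, the value stored on acquisition becomes:

    /* 0x80000000 is the kernel's FUTEX_WAITERS bit */
    int new = self->tid;
    if (shared && waiters) new |= 0x80000000;
    /* ... cas new into the lock word ... */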
|
|
when manipulating the robust list, the order of stores matters,
because the code may be asynchronously interrupted by a fatal signal
and the kernel will then access the robust list in what is essentially
an async-signal context.
previously, aliasing considerations made it seem unlikely that a
compiler could reorder the stores, but proving that they could not be
reordered incorrectly would have been extremely difficult. instead
I've opted to make all the pointers used as part of the robust list,
including those in the robust list head and in the individual mutexes,
volatile.
in addition, the format of the robust list has been changed to point
back to the head at the end, rather than ending with a null pointer.
this is to match the documented kernel robust list ABI. the null
pointer, which was previously used, only worked because faults during
access terminate the robust list processing.
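for reference, the documented kernel ABI (from the uapi futex header)
is a circular list; musl's copies of these structures now use volatile
pointers as described:

    struct robust_list { struct robust_list *next; };
    struct robust_list_head {
        struct robust_list list;  /* last node points back here */
        long futex_offset;        /* offset from node to futex word */
        struct robust_list *list_op_pending;
    };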
|
|
a robust mutex should not enter the unrecoverable status until it's
unlocked without marking it consistent. previously, flag 8 in the type
was used as an indication of unrecoverable, but only honored after
successful locking; this resulted in a race window where the
unrecoverable mutex could appear to a second thread as locked/busy
again while the first thread was in the process of observing it as
unrecoverable.
now, flag 8 is used to mean that the mutex is in the process of being
recovered, but not yet marked consistent. the flag only takes effect
in pthread_mutex_unlock, where it causes the value 0x40000000 (owner
dead flag, with old owner tid 0, an otherwise impossible state) to be
stored in the lock. subsequent lock attempts will interpret this state
as unrecoverable.
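sketched, the unlock-side logic is:

    /* flag 8 still set at unlock means pthread_mutex_consistent was
     * never called: park the lock in the unrecoverable state */
    if (type & 8) {
        a_store(&m->_m_lock, 0x40000000); /* owner-died, tid 0 */
        /* waiters woken from here on see it as unrecoverable */
    }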
|
|
per the resolution of Austin Group issue 755, the POSIX requirement
that ownership be enforced for recursive and error-checking mutexes
does not allow a random new thread to acquire ownership of an orphaned
mutex just because it happened to be assigned the same tid as the
original owner that exited with the mutex locked.
one possible fix for this issue would be to disallow the kernel thread
from terminating when it exits with mutexes held, permanently reserving
the tid against reuse. however, this does not solve the problem for
process-shared mutexes where lifetime cannot be controlled, so it was
not used.
the alternate approach I've taken is to reuse the robust mutex system
for non-robust recursive and error-checking mutexes. when a thread
exits, the kernel (or the new userspace robust-list code added in
commit b092f1c5fa9c048e12d002c7b972df5ecbe96d1d) will set the
owner-died bit for these orphaned mutexes, but since the mutex-type is
not robust, pthread_mutex_trylock will not allow a new owner to
acquire them. instead, they remain in a state of being permanently
locked, as desired.
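on the trylock side, the effect is roughly (flag values as used in
musl's type field; a sketch, not the exact code):

    /* owner-died bit set, but the type lacks the robust flag (4):
     * never grant ownership; the mutex reads as busy forever */
    if ((old & 0x40000000) && !(type & 4))
        return EBUSY;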
|
|
the kernel always uses non-private wake when walking the robust list
when a thread or process exits, so it's not able to wake waiters
listening with the private futex flag. this problem is solved by doing
the equivalent in userspace as the last step of pthread_exit.
care is taken to remove mutexes from the robust list before unlocking
them so that the kernel will not attempt to access them again,
possibly after another thread locks them. this removal code can treat
the list as singly-linked, since no further code which would add or
remove items is able to run at this point. moreover, the pending
pointer is not needed since the mutexes being unlocked are all
process-local; in the case of asynchronous process termination, they
all cease to exist.
since a process-local robust mutex cannot come into existence without
a call to pthread_mutexattr_setrobust in the same process, the code
for userspace robust list processing is put in that source file, and
a weak alias to a dummy function is used to avoid pulling in this
bloat as part of pthread_exit in static-linked programs.
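a simplified sketch of that last step (mutex_from_node and node_next
stand in for the offsetof arithmetic; the real code is more careful):

    /* treat the list as singly-linked: unlink each mutex before
     * unlocking it so the kernel never touches it again */
    while (head->list.next != &head->list) {
        pthread_mutex_t *m = mutex_from_node(head->list.next);
        int waiters = m->_m_waiters;
        head->list.next = node_next(m);  /* unlink first */
        int cont = a_swap(&m->_m_lock, 0x40000000); /* owner died */
        if (cont < 0 || waiters)
            __wake(&m->_m_lock, 1, 1);   /* private wake */
    }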
|
|
private-futex uses the virtual address of the futex int directly as
the hash key rather than requiring the kernel to resolve the address
to an underlying backing for the mapping in which it lies. for certain
usage patterns it improves performance significantly.
in many places, the code using futex __wake and __wait operations was
already passing a correct fixed zero or nonzero flag for the priv
argument, so no change was needed at the site of the call, only in the
__wake and __wait functions themselves. in other places, especially
where the process-shared attribute for a synchronization object was
not previously tracked, additional new code is needed. for mutexes,
the only place to store the flag is in the type field, so additional
bit masking logic is needed for accessing the type.
for non-process-shared condition variable broadcasts, the futex
requeue operation is unable to requeue from a private futex to a
process-shared one in the mutex structure, so requeue is simply
disabled in this case by waking all waiters.
for robust mutexes, the kernel always performs a non-private wake when
the owner dies. in order not to introduce a behavioral regression in
non-process-shared robust mutexes (when the owning thread dies), they
are simply forced to be treated as process-shared for now, giving
correct behavior at the expense of performance. this can be fixed by
adding explicit code to pthread_exit to do the right thing for
non-shared robust mutexes in userspace rather than relying on the
kernel to do it, and will be fixed in this way later.
since not all supported kernels have private futex support, the new
code detects EINVAL from the futex syscall and falls back to making
the call without the private flag. no attempt to cache the result is
made; caching it and using the cached value efficiently is somewhat
difficult, and not worth the complexity when the benefits would be
seen only on ancient kernels which have numerous other limitations and
bugs anyway.
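the fallback is sketched below (FUTEX_PRIVATE is the kernel's
FUTEX_PRIVATE_FLAG, value 128; __syscall returns a negative errno):

    static int futex_wait(volatile int *addr, int val, int priv)
    {
        int r = __syscall(SYS_futex, addr, FUTEX_WAIT|priv, val, 0);
        if (priv && r == -EINVAL) /* kernel lacks private futex */
            r = __syscall(SYS_futex, addr, FUTEX_WAIT, val, 0);
        return r;
    }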
|
|
With the exception of a fenv implementation, the port is fully featured.
The port has been tested in or1ksim, the golden reference functional
simulator for OpenRISC 1000.
It passes all libc-test tests (except the math tests that require
a fenv implementation).
The port assumes an or1k implementation that has support for
atomic instructions (l.lwa/l.swa).
Although it passes all the libc-test tests, the port is still
in an experimental state and has so far seen very little
'real-world' use.
|
|
previously we detected this bug in configure and issued advice for a
workaround, but this turned out not to work. since then gcc 4.9.0 has
appeared in several distributions, and now 4.9.1 has been released
without a fix despite this being a wrong code generation bug which is
supposed to be a release-blocker, per gcc policy.
since the scope of the bug seems to affect only data objects (rather
than functions) whose definitions are overridable, and there are only
a very small number of these in musl, I am just changing them from
const to volatile for the time being. simply removing the const would
be sufficient to make gcc 4.9.1 work (the non-const case was
inadvertently fixed as part of another change in gcc), and this would
also be sufficient with 4.9.0 if we forced -O0 on the affected files
or on the whole build. however it's cleaner to just remove all the
broken compiler detection and use volatile, which will ensure that
they are never constant-folded. the quality of a non-broken compiler's
output should not be affected except for the fact that these objects
are no longer const and thus possibly add a few bytes to data/bss.
this change can be reconsidered and possibly reverted at some point in
the future when the broken gcc versions are no longer relevant.
|
|
if the order of object files in the static archive libc.a was not
respected by the linker, the old logic could wrongly cause POSIX
symbols outside of the ISO C namespace to be pulled into pure C
programs. this should not happen with well-behaved linkers, but
relying on the link order was a bad idea anyway.
files are renamed to better reflect their contents now that they don't
need names to control their order as members in the archive file.
|
|
the main motivation for this change is to remove the assumption that
the tid of the main thread is also the pid of the process. (the value
returned by the set_tid_address syscall was used to fill both fields
despite it semantically being the tid.) this is historically and
presently true on linux and unlikely to change, but it conceivably
could be false on other systems that otherwise reproduce the linux
syscall api/abi.
only a few parts of the code were actually still using the cached pid.
in a couple places (aio and synccall) it was a minor optimization to
avoid a syscall. caching could be reintroduced, but lazily as part of
the public getpid function rather than at program startup, if it's
deemed important for performance later. in other places (cancellation
and pthread_kill) the pid was completely unnecessary; the tkill
syscall can be used instead of tgkill. this is actually a rather
subtle issue, since tgkill is supposedly a solution to race conditions
that can affect use of tkill. however, as documented in the commit
message for commit 7779dbd2663269b465951189b4f43e70839bc073, tgkill
does not actually solve this race; it just limits it to happening
within one process rather than between processes. we use a lock that
avoids the race in pthread_kill, and the use in the cancellation
signal handler is self-targeted and thus not subject to tid reuse
races, so both are safe regardless of which syscall (tgkill or tkill)
is used.
|
|
this commit adds non-stub implementations of setlocale, duplocale,
newlocale, and uselocale, along with the data structures and minimal
code needed for representing the active locale on a per-thread basis
and optimizing the common case where thread-local locale settings are
not in use.
at this point, the data structures only contain what is necessary to
represent LC_CTYPE (a single flag) and LC_MESSAGES (a name for use in
finding message translation files). representation for the other
categories will be added later; the expectation is that a single
pointer will suffice for each.
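concretely, the per-locale object is on the order of the following
(names illustrative, not the actual definitions):

    struct __locale_struct {
        int ctype_utf8;         /* LC_CTYPE: a single flag */
        char messages_name[16]; /* LC_MESSAGES: name, max 15 bytes */
        /* remaining categories: expected to be one pointer each */
    };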
for LC_CTYPE, the strings "C" and "POSIX" are treated as special; any
other string is accepted and treated as "C.UTF-8". for other
categories, any string is accepted after being truncated to a maximum
supported length (currently 15 bytes). for LC_MESSAGES, the name is
kept regardless of whether libc itself can use such a message
translation locale, since applications using catgets or gettext should
be able to use message locales libc is not aware of. for other
categories, names which are not successfully loaded as locales (which,
at present, means all names) are treated as aliases for "C". setlocale
never fails.
locale settings are not yet used anywhere, so this commit should have
no visible effects except for the contents of the string returned by
setlocale.
|
|
such separation serves multiple purposes:
- by having the common path for __tls_get_addr alone in its own
  function with a tail call to the slow case, code generation is
  greatly improved (a sketch follows below).
- by having __tls_get_addr in its own file, it can be replaced on a
  per-arch basis as needed, for optimization or ABI-specific purposes.
- by removing __tls_get_addr from __init_tls.c, a few bytes of code
are shaved off of static binaries (which are unlikely to use this
function unless the linker messed up).
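the fast-path function, in sketch form (dtv offset details elided):

    void *__tls_get_addr(size_t *v)
    {
        pthread_t self = __pthread_self();
        if (v[0] <= (size_t)self->dtv[0])
            return (char *)self->dtv[v[0]] + v[1];
        return __tls_get_new(v); /* tail call to the slow case */
    }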
|
|
the motivation for the errno_ptr field in the thread structure, which
this commit removes, was to allow the main thread's errno to keep its
address when lazy thread pointer initialization was used. &errno was
evaluated prior to setting up the thread pointer and stored in
errno_ptr for the main thread; subsequently created threads would have
errno_ptr pointing to their own errno_val in the thread structure.
since lazy initialization was removed, there is no need for this extra
level of indirection; __errno_location can simply return the address
of the thread's errno_val directly. this does cause &errno to change,
but the change happens before entry to application code, and thus is
not observable.
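in code form, the function reduces to:

    int *__errno_location(void)
    {
        return &__pthread_self()->errno_val;
    }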
|
|
prior to version 1.1.0, the difference between pthread_self (the
public function) and __pthread_self (the internal macro or inline
function) was that the former would lazily initialize the thread
pointer if it was not already initialized, whereas the latter would
crash in this case. since lazy initialization is no longer supported,
use of pthread_self no longer makes sense; it simply generates larger,
slower code.
|
|
such kernels cannot support threads, but the thread pointer is also
important for other purposes, most notably stack protector. without a
valid thread pointer, all code compiled with stack protector will
crash. the same applies to any use of thread-local storage by
applications or libraries.
the concept of this patch is to fall back to using the modify_ldt
syscall, which has been around since linux 1.0, to set up the gs
segment register. since the kernel does not have a way to
automatically assign ldt entries, use of slot zero is hard-coded. if
this fallback path is used, __set_thread_area returns a positive value
(rather than the usual zero for success, or negative for error)
indicating to the caller that the thread pointer was successfully set,
but only for the main thread, and that thread creation will not work
properly. the code in __init_tp has been changed accordingly to record
this result for later use by pthread_create.
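a rough sketch of the fallback path (struct layout and selector math
shown for illustration; the real code is i386-specific and partly asm):

    struct user_desc desc = {
        .entry_number = 0,   /* hard-coded ldt slot zero */
        .base_addr = (uintptr_t)tp,
        .limit = 0xfffff,
        .seg_32bit = 1,
        .limit_in_pages = 1,
        .useable = 1,
    };
    if (__syscall(SYS_modify_ldt, 1, &desc, sizeof desc) < 0)
        return -1;
    /* selector: index 0, table indicator = ldt (4), rpl 3 -> 0x7 */
    __asm__ __volatile__ ("mov %0,%%gs" :: "r"(0<<3 | 4 | 3));
    return 1; /* positive: tp set, but for the main thread only */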
|
|
at the end of successful pthread_once, there was a race window during
which another thread calling pthread_once would momentarily change the
state back from 2 (finished) to 1 (in-progress). in this case, the
status was immediately changed back, but with no wake call, meaning
that waiters which arrived during this short window could block
forever. there are two possible fixes. one would be adding the wake to
the code path where it was missing. but it's better just to avoid
reverting the status at all, by using compare-and-swap instead of
swap.
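the change, in sketch form:

    /* before: a_swap could briefly revert the finished state 2 to 1 */
    /* old = a_swap(control, 1); */
    /* after: cas transitions only 0 -> 1 and leaves state 2 alone */
    old = a_cas(control, 0, 1);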
|
|
this is the first step in an overhaul aimed at greatly simplifying and
optimizing everything dealing with thread-local state.
previously, the thread pointer was initialized lazily on first access,
or at program startup if stack protector was in use, or at certain
random places where inconsistent state could be reached if it were not
initialized early. while believed to be fully correct, the logic was
fragile and non-obvious.
in the first phase of the thread pointer overhaul, support is retained
(and in some cases improved) for systems/situations where loading the
thread pointer fails, e.g. old kernels.
some notes on specific changes:
- the confusing use of libc.main_thread as an indicator that the
thread pointer is initialized is eliminated in favor of an explicit
has_thread_pointer predicate.
- sigaction no longer needs to ensure that the thread pointer is
initialized before installing a signal handler (this was needed to
prevent a situation where the signal handler caused the thread
pointer to be initialized and the subsequent sigreturn cleared it
again) but it still needs to ensure that implementation-internal
thread-related signals are not blocked.
- pthread tsd initialization for the main thread is deferred in a new
manner to minimize bloat in the static-linked __init_tp code.
- pthread_setcancelstate no longer needs special handling for the
situation before the thread pointer is initialized. it simply fails
on systems that cannot support a thread pointer, which are
non-conforming anyway.
- pthread_cleanup_push/pop now check for missing thread pointer and
nop themselves out in this case, so stdio no longer needs to avoid
the cancellable path when the thread pointer is not available.
a number of cases remain where certain interfaces may crash if the
system does not support a thread pointer. at this point, these should
be limited to pthread interfaces, and the number of such cases should
be fewer than before.
|
|
linux, gcc, etc. all use "sh" as the name for the superh arch. there
was already some inconsistency internally in musl: the dynamic linker
was searching for "ld-musl-sh.path" as its path file despite its own
name being "ld-musl-superh.so.1". there was some sentiment in both
directions as to how to resolve the inconsistency, but overall "sh"
was favored.
|
|
The architecture-specific assembly versions of clone did not set errno on
failure, which is inconsistent with glibc. __clone still returns the error
via its return value, and clone is now a wrapper that sets errno as needed.
The public clone has also been moved to src/linux, as it's not directly
related to the pthreads API.
__clone is called by pthread_create, which does not report errors via
errno. Though not strictly necessary, it's nice to avoid clobbering errno
here.
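The wrapper is roughly the following (argument order per the Linux
clone(2) prototype; __syscall_ret is the musl-internal helper that
stores errno and fixes up the return value):

    #include <stdarg.h>
    #include <sys/types.h>

    int __clone(int (*)(void *), void *, int, void *, ...);
    long __syscall_ret(unsigned long);

    int clone(int (*func)(void *), void *stack, int flags, void *arg, ...)
    {
        va_list ap;
        va_start(ap, arg);
        pid_t *ptid = va_arg(ap, pid_t *);
        void  *tls  = va_arg(ap, void *);
        pid_t *ctid = va_arg(ap, pid_t *);
        va_end(ap);
        return __syscall_ret(__clone(func, stack, flags, arg, ptid, tls, ctid));
    }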
|
|
this practice came from very early, before internal/syscall.h defined
macros that could accept pointer arguments directly and handle them
correctly. aside from being ugly and unnecessary, it looks like it
will be problematic when we add support for 32-bit ABIs on archs where
registers (and syscall arguments) are 64-bit, e.g. x32 and mips n32.
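the macros now normalize each argument themselves, along these lines
(a sketch of the approach, not the exact header contents):

    #define __scc(X) ((long)(X)) /* pointer or integer -> register word */
    #define __syscall2(n, a, b) syscall2((n), __scc(a), __scc(b))

so call sites can pass pointers directly instead of writing (long)ptr
at each syscall.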
|
|