path: root/src/thread
2014-08-25  refrain from spinning on locks when there is already a waiter  (Rich Felker, 5 files, -5/+5)
if there is already a waiter for a lock, spinning on the lock is essentially an attempt to steal it from whichever waiter would obtain it via any priority rules in place, and is therefore undesirable. in the current implementation, there is always an inherent race window at unlock during which a newly-arriving thread may steal the lock from the existing waiters, but we should aim to keep this window minimal rather than enlarging it.
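a minimal sketch of the idea in C, using C11 atomics and illustrative names; futex_wait stands for an assumed wrapper around the futex syscall, and the layout is not musl's actual lock structure:

    #include <stdatomic.h>

    struct lk { atomic_int lock; atomic_int waiters; };

    void futex_wait(atomic_int *addr, int val);  /* assumed syscall wrapper */

    static void lk_take(struct lk *l)
    {
        int spins = 100;
        for (;;) {
            int expect = 0;
            if (atomic_compare_exchange_weak(&l->lock, &expect, 1))
                return;                           /* acquired */
            if (spins && !atomic_load(&l->waiters)) {
                spins--;                          /* nobody queued yet, so
                                                     spinning steals from no one */
                continue;
            }
            /* a waiter already exists: stop spinning and queue behind it */
            atomic_fetch_add(&l->waiters, 1);
            futex_wait(&l->lock, 1);
            atomic_fetch_sub(&l->waiters, 1);
        }
    }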
2014-08-25  spin before waiting on futex in mutex and rwlock lock operations  (Rich Felker, 3 files, -0/+20)
2014-08-25  spin in sem_[timed]wait before performing futex wait  (Rich Felker, 1 file, -0/+5)
empirically, this increases the maximum rate of wait/post operations between two threads by 20-150 times on machines I tested, including x86 and arm. conceptually, it makes sense to do some spinning because semaphores are intended to be usable as a notification mechanism between threads, not just as locks, and low-latency notification is a valuable property to have.
2014-08-25  sanitize number of spins in userspace before futex wait  (Rich Felker, 2 files, -2/+2)
the previous spin limit of 10000 was utterly unreasonable. empirically, it could consume up to 200000 cycles, whereas a failed futex wait (EAGAIN) typically takes 1000 cycles or less, and even a true wait/wake round seems much less expensive. the new counts (100 for general wait, 200 in barrier) were simply chosen to be in the range of what's reasonable without having adverse effects on casual micro-benchmark tests I have been running. they may still be too high, from a standpoint of not wasting cpu cycles, but at least they're a lot better than before. rigorous testing across different archs and cpu models should be performed at some point to determine whether further adjustments should be made.
2014-08-23  fix false ownership of stdio FILEs due to tid reuse  (Rich Felker, 1 file, -0/+2)
this is analogous to commit fffc5cda10e0c5c910b40f7be0d4fa4e15bb3f48, which fixed the corresponding issue for mutexes. the robust list can't be used here because the locks do not share a common layout with mutexes. at some point it may make sense to simply incorporate a mutex object into the FILE structure and use it, but that would be a much more invasive change, and it doesn't mesh well with the current design that uses a simpler code path for internal locking and pulls in the recursive-mutex-like code when the flockfile API is used explicitly.
2014-08-22  fix fallback checks for kernels without private futex support  (Rich Felker, 4 files, -4/+4)
for unknown syscall commands, the kernel produces ENOSYS, not EINVAL.
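a sketch of the corrected fallback, using only the public syscall interface; the wrapper name is illustrative:

    #include <errno.h>
    #include <linux/futex.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static long futex_wait_sketch(volatile int *addr, int val, int priv)
    {
        if (priv) {
            long r = syscall(SYS_futex, addr,
                             FUTEX_WAIT | FUTEX_PRIVATE_FLAG, val, 0);
            /* old kernels report the unknown command as ENOSYS, not EINVAL */
            if (!(r == -1 && errno == ENOSYS)) return r;
        }
        return syscall(SYS_futex, addr, FUTEX_WAIT, val, 0);
    }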
2014-08-22  fix use of uninitialized memory with application-provided thread stacks  (Rich Felker, 1 file, -0/+2)
the subsequent code in pthread_create and the code which copies TLS initialization images to the new thread's TLS space assume that the memory provided to them is zero-initialized, which is true when it's obtained by pthread_create using mmap. however, when the caller provides a stack using pthread_attr_setstack, pthread_create cannot make any assumptions about the contents. simply zero-filling the relevant memory in this case is the simplest and safest fix.
2014-08-18  further simplify and optimize new cond var  (Rich Felker, 1 file, -29/+21)
the main idea of the changes made is to have waiters wait directly on the "barrier" lock that was used to prevent them from making forward progress too early, rather than first waiting on the atomic state value and then attempting to lock the barrier.

in addition, adjustments to the mutex waiter count are optimized. previously, each waking waiter decremented the count (unless it was the first) then immediately incremented it again for the next waiter (unless it was the last). this was a roundabout way of achieving the equivalent of incrementing it once for the first waiter and decrementing it once for the last.
2014-08-18  simplify and improve new cond var implementation  (Rich Felker, 1 file, -40/+22)
previously, wake order could be unpredictable: if a waiter happened to leave its futex wait on the state early, e.g. due to EAGAIN while restarting after a signal handler, it could acquire the mutex out of turn. handling this required ugly O(n) list walking in the unwait function and accounting to remove waiters that already woke from the list. with the new changes, the "barrier" locks in each waiter node are only unlocked in turn. in addition to simplifying the code, this seems to improve performance slightly, probably by reducing the number of accesses threads make to each other's stacks. as an additional benefit, unrecoverable mutex re-locking errors (mainly ENOTRECOVERABLE for robust mutexes) no longer need to be handled with deadlock; they can be reported to the caller, since the unlocking sequence makes it unnecessary to rely on the mutex to synchronize access to the waiter list.
2014-08-17  redesign cond var implementation to fix multiple issues  (Rich Felker, 5 files, -88/+209)
the immediate issue that was reported by Jens Gustedt and needed to be fixed was corruption of the cv/mutex waiter states when switching to using a new mutex with the cv after all waiters were unblocked but before they finished returning from the wait function. self-synchronized destruction was also handled poorly and may have had race conditions. and the use of sequence numbers for waking waiters admitted a theoretical missed-wakeup if the sequence number wrapped through the full 32-bit space.

the new implementation is largely documented in the comments in the source. the basic principle is to use linked lists initially attached to the cv object, but detachable on signal/broadcast, made up of nodes residing in automatic storage (stack) on the threads that are waiting. this eliminates the need for waiters to access the cv object after they are signaled, and allows us to limit wakeup to one waiter at a time during broadcasts even when futex requeue cannot be used. performance is also greatly improved, roughly doubling on some tests.

basically nothing is changed in the process-shared cond var case, where this implementation does not work, since processes do not have access to one another's local storage.
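a sketch of the data structure this describes; type and field names are illustrative, not musl's exact layout:

    struct waiter {
        struct waiter *prev, *next;   /* doubly linked while attached to the cv */
        volatile int state;           /* e.g. WAITING -> SIGNALED -> LEAVING */
        volatile int barrier;         /* per-waiter lock, released in wake order */
    };

    struct cond_sketch {
        struct waiter *volatile head; /* detached wholesale on signal/broadcast */
    };

each waiting thread declares a struct waiter in automatic storage, links it in at head, and after being signaled touches only its own node and its neighbors', never the cond object itself.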
2014-08-17  fix possible failure-to-wake deadlock with robust mutexes  (Rich Felker, 1 file, -1/+4)
when the kernel is responsible for waking waiters on a robust mutex whose owner died, it does not have a waiters count available and must rely entirely on the waiter bit of the lock value. normally, this bit is only set by newly arriving waiters, so it will be clear if no new waiters arrived after the current owner obtained the lock, even if there are other waiters present. leaving it clear is desirable because it allows timed-lock operations to remove themselves as waiters and avoid causing unnecessary futex wake syscalls. however, for process-shared robust mutexes, we need to set the bit whenever there are existing waiters so that the kernel will know to wake them. for non-process-shared robust mutexes, the wake happens in userspace and can look at the waiters count, so the bit does not need to be set in the non-process-shared case.
2014-08-17  make pointers used in robust list volatile  (Rich Felker, 3 files, -9/+16)
when manipulating the robust list, the order of stores matters, because the code may be asynchronously interrupted by a fatal signal and the kernel will then access the robust list in what is essentially an async-signal context. previously, aliasing considerations made it seem unlikely that a compiler could reorder the stores, but proving that they could not be reordered incorrectly would have been extremely difficult. instead I've opted to make all the pointers used as part of the robust list, including those in the robust list head and in the individual mutexes, volatile. in addition, the format of the robust list has been changed to point back to the head at the end, rather than ending with a null pointer. this is to match the documented kernel robust list ABI. the null pointer, which was previously used, only worked because faults during access terminate the robust list processing.
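a sketch of the resulting layout, mirroring the documented kernel robust-list ABI; the struct name is illustrative:

    struct robust_list_sketch {
        volatile void *volatile head;    /* circular: the last entry points
                                            back to &head, not to NULL */
        long off;                        /* offset from a list entry to its
                                            mutex's lock word */
        volatile void *volatile pending; /* entry currently being added or
                                            removed, for crash consistency */
    };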
2014-08-16  fix robust mutex unrecoverable status, and related clean-up  (Rich Felker, 3 files, -12/+4)
a robust mutex should not enter the unrecoverable status until it's unlocked without marking it consistent. previously, flag 8 in the type was used as an indication of unrecoverable, but only honored after successful locking; this resulted in a race window where the unrecoverable mutex could appear to a second thread as locked/busy again while the first thread was in the process of observing it as unrecoverable. now, flag 8 is used to mean that the mutex is in the process of being recovered, but not yet marked consistent. the flag only takes effect in pthread_mutex_unlock, where it causes the value 0x40000000 (owner dead flag, with old owner tid 0, an otherwise impossible state) to be stored in the lock. subsequent lock attempts will interpret this state as unrecoverable.
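an illustrative sketch of the unlock-time transition, assuming a simplified mutex layout (not musl's exact code; wake logic omitted):

    #include <stdatomic.h>

    struct rmut_sketch { atomic_int lock; int type; };

    static void runlock_sketch(struct rmut_sketch *m)
    {
        if (m->type & 8)   /* recovered, but never marked consistent */
            atomic_store(&m->lock, 0x40000000); /* owner-dead bit with tid 0:
                                                   unreachable as a live state,
                                                   so trylock reads it as
                                                   unrecoverable */
        else
            atomic_store(&m->lock, 0);
    }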
2014-08-16  fix false ownership of mutexes due to tid reuse, using robust list  (Rich Felker, 4 files, -23/+26)
per the resolution of Austin Group issue 755, the POSIX requirement that ownership be enforced for recursive and error-checking mutexes does not allow a random new thread to acquire ownership of an orphaned mutex just because it happened to be assigned the same tid as the original owner that exited with the mutex locked.

one possible fix for this issue would be to disallow the kernel thread to terminate when it exited with mutexes held, permanently reserving the tid against reuse. however, this does not solve the problem for process-shared mutexes where lifetime cannot be controlled, so it was not used.

the alternate approach I've taken is to reuse the robust mutex system for non-robust recursive and error-checking mutexes. when a thread exits, the kernel (or the new userspace robust-list code added in commit b092f1c5fa9c048e12d002c7b972df5ecbe96d1d) will set the owner-died bit for these orphaned mutexes, but since the mutex-type is not robust, pthread_mutex_trylock will not allow a new owner to acquire them. instead, they remain in a state of being permanently locked, as desired.
2014-08-16  enable private futex for process-local robust mutexes  (Rich Felker, 3 files, -1/+25)
the kernel always uses non-private wake when walking the robust list when a thread or process exits, so it's not able to wake waiters listening with the private futex flag. this problem is solved by doing the equivalent in userspace as the last step of pthread_exit.

care is taken to remove mutexes from the robust list before unlocking them so that the kernel will not attempt to access them again, possibly after another thread locks them. this removal code can treat the list as singly-linked, since no further code which would add or remove items is able to run at this point. moreover, the pending pointer is not needed since the mutexes being unlocked are all process-local; in the case of asynchronous process termination, they all cease to exist.

since a process-local robust mutex cannot come into existence without a call to pthread_mutexattr_setrobust in the same process, the code for userspace robust list processing is put in that source file, and a weak alias to a dummy function is used to avoid pulling in this bloat as part of pthread_exit in static-linked programs.
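a sketch of the exit-time walk, reusing the robust_list_sketch layout from the sketch above; the two helpers are assumed for illustration:

    void *next_of_sketch(void *node);                 /* assumed */
    void unlock_with_private_wake_sketch(void *node); /* assumed */

    static void flush_robust_sketch(struct robust_list_sketch *rl)
    {
        volatile void *volatile *p = &rl->head;
        while (*p != (void *)&rl->head) {      /* circular: stop at the head */
            void *node = (void *)*p;
            *p = next_of_sketch(node);         /* unlink first, so the kernel
                                                  never revisits this mutex */
            unlock_with_private_wake_sketch(node);
        }
    }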
2014-08-15  make futex operations use private-futex mode when possible  (Rich Felker, 22 files, -64/+74)
private-futex uses the virtual address of the futex int directly as the hash key rather than requiring the kernel to resolve the address to an underlying backing for the mapping in which it lies. for certain usage patterns it improves performance significantly.

in many places, the code using futex __wake and __wait operations was already passing a correct fixed zero or nonzero flag for the priv argument, so no change was needed at the site of the call, only in the __wake and __wait functions themselves. in other places, especially where the process-shared attribute for a synchronization object was not previously tracked, additional new code is needed.

for mutexes, the only place to store the flag is in the type field, so additional bit masking logic is needed for accessing the type. for non-process-shared condition variable broadcasts, the futex requeue operation is unable to requeue from a private futex to a process-shared one in the mutex structure, so requeue is simply disabled in this case by waking all waiters.

for robust mutexes, the kernel always performs a non-private wake when the owner dies. in order not to introduce a behavioral regression in non-process-shared robust mutexes (when the owning thread dies), they are simply forced to be treated as process-shared for now, giving correct behavior at the expense of performance. this can be fixed by adding explicit code to pthread_exit to do the right thing for non-shared robust mutexes in userspace rather than relying on the kernel to do it, and will be fixed in this way later.

since not all supported kernels have private futex support, the new code detects EINVAL from the futex syscall and falls back to making the call without the private flag. no attempt to cache the result is made; caching it and using the cached value efficiently is somewhat difficult, and not worth the complexity when the benefits would be seen only on ancient kernels which have numerous other limitations and bugs anyway.
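a sketch of the extra masking this implies for mutexes; the bit values are illustrative:

    #define TYPE_MASK_SKETCH   3    /* normal / recursive / errorcheck */
    #define SHARED_BIT_SKETCH  128  /* process-shared flag, stored in type */

    static inline int mtype_sketch(int field) { return field & TYPE_MASK_SKETCH; }
    static inline int mpriv_sketch(int field) { return !(field & SHARED_BIT_SKETCH); }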
2014-07-18  add or1k (OpenRISC 1000) architecture port  (Stefan Kristiansson, 4 files, -0/+64)
With the exception of a fenv implementation, the port is fully featured. The port has been tested in or1ksim, the golden reference functional simulator for OpenRISC 1000. It passes all libc-test tests (except the math tests, which require a fenv implementation). The port assumes an or1k implementation that has support for atomic instructions (l.lwa/l.swa). Although it passes all the libc-test tests, the port is still in an experimental state and has so far seen very little 'real-world' use.
2014-07-16  work around constant folding bug 61144 in gcc 4.9.0 and 4.9.1  (Rich Felker, 2 files, -4/+4)
previously we detected this bug in configure and issued advice for a workaround, but this turned out not to work. since then gcc 4.9.0 has appeared in several distributions, and now 4.9.1 has been released without a fix despite this being a wrong code generation bug which is supposed to be a release-blocker, per gcc policy.

since the scope of the bug seems to affect only data objects (rather than functions) whose definitions are overridable, and there are only a very small number of these in musl, I am just changing them from const to volatile for the time being. simply removing the const would be sufficient to make gcc 4.9.1 work (the non-const case was inadvertently fixed as part of another change in gcc), and this would also be sufficient with 4.9.0 if we forced -O0 on the affected files or on the whole build. however it's cleaner to just remove all the broken compiler detection and use volatile, which will ensure that they are never constant-folded. the quality of a non-broken compiler's output should not be affected except for the fact that these objects are no longer const and thus possibly add a few bytes to data/bss.

this change can be reconsidered and possibly reverted at some point in the future when the broken gcc versions are no longer relevant.
2014-07-06  rename file containing pthread_cleanup_push and pop for consistency  (Rich Felker, 1 file, -0/+0)
2014-07-06  rework cancellation weak alias logic not to depend on archive order  (Rich Felker, 3 files, -6/+12)
if the order of object files in the static archive libc.a was not respected by the linker, the old logic could wrongly cause POSIX symbols outside of the ISO C namespace to be pulled into pure C programs. this should not happen with well-behaved linkers, but relying on the link order was a bad idea anyway. files are renamed to better reflect their contents now that they don't need names to control their order as members in the archive file.
2014-07-05  eliminate use of cached pid from thread structure  (Rich Felker, 4 files, -8/+5)
the main motivation for this change is to remove the assumption that the tid of the main thread is also the pid of the process. (the value returned by the set_tid_address syscall was used to fill both fields despite it semantically being the tid.) this is historically and presently true on linux and unlikely to change, but it conceivably could be false on other systems that otherwise reproduce the linux syscall api/abi.

only a few parts of the code were actually still using the cached pid. in a couple places (aio and synccall) it was a minor optimization to avoid a syscall. caching could be reintroduced, but lazily as part of the public getpid function rather than at program startup, if it's deemed important for performance later.

in other places (cancellation and pthread_kill) the pid was completely unnecessary; the tkill syscall can be used instead of tgkill. this is actually a rather subtle issue, since tgkill is supposedly a solution to race conditions that can affect use of tkill. however, as documented in the commit message for commit 7779dbd2663269b465951189b4f43e70839bc073, tgkill does not actually solve this race; it just limits it to happening within one process rather than between processes. we use a lock that avoids the race in pthread_kill, and the use in the cancellation signal handler is self-targeted and thus not subject to tid reuse races, so both are safe regardless of which syscall (tgkill or tkill) is used.
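a sketch of the tkill usage with the race-avoiding lock; the thread structure and lock helpers are assumed for illustration:

    #include <errno.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    struct thr_sketch { volatile int killlock; int tid; int dead; };

    void take_lock_sketch(volatile int *lk);   /* assumed */
    void drop_lock_sketch(volatile int *lk);   /* assumed */

    static int kill_sketch(struct thr_sketch *t, int sig)
    {
        int r;
        take_lock_sketch(&t->killlock);  /* excludes exit and tid reuse */
        r = t->dead ? ESRCH
                    : (syscall(SYS_tkill, t->tid, sig) ? errno : 0);
        drop_lock_sketch(&t->killlock);
        return r;
    }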
2014-07-02  add locale framework  (Rich Felker, 1 file, -0/+7)
this commit adds non-stub implementations of setlocale, duplocale, newlocale, and uselocale, along with the data structures and minimal code needed for representing the active locale on a per-thread basis and optimizing the common case where thread-local locale settings are not in use.

at this point, the data structures only contain what is necessary to represent LC_CTYPE (a single flag) and LC_MESSAGES (a name for use in finding message translation files). representation for the other categories will be added later; the expectation is that a single pointer will suffice for each.

for LC_CTYPE, the strings "C" and "POSIX" are treated as special; any other string is accepted and treated as "C.UTF-8". for other categories, any string is accepted after being truncated to a maximum supported length (currently 15 bytes). for LC_MESSAGES, the name is kept regardless of whether libc itself can use such a message translation locale, since applications using catgets or gettext should be able to use message locales libc is not aware of. for other categories, names which are not successfully loaded as locales (which, at present, means all names) are treated as aliases for "C". setlocale never fails.

locale settings are not yet used anywhere, so this commit should have no visible effects except for the contents of the string returned by setlocale.
2014-06-19  separate __tls_get_addr implementation from dynamic linker/init_tls  (Rich Felker, 1 file, -0/+17)
such separation serves multiple purposes:

- by having the common path for __tls_get_addr alone in its own function with a tail call to the slow case, code generation is greatly improved.
- by having __tls_get_addr in its own file, it can be replaced on a per-arch basis as needed, for optimization or ABI-specific purposes.
- by removing __tls_get_addr from __init_tls.c, a few bytes of code are shaved off of static binaries (which are unlikely to use this function unless the linker messed up).
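a sketch of the resulting fast path; the dtv layout and helper names are illustrative:

    #include <stddef.h>

    struct tcb_sketch { void **dtv; };
    struct tcb_sketch *self_sketch(void);  /* assumed thread-pointer access */
    void *tls_get_new_sketch(size_t *v);   /* assumed slow case, own file */

    void *tls_get_addr_sketch(size_t *v)
    {
        struct tcb_sketch *self = self_sketch();
        if (v[0] <= (size_t)self->dtv[0])           /* generation check */
            return (char *)self->dtv[v[0]] + v[1];  /* common fast path */
        return tls_get_new_sketch(v);               /* tail call to slow case */
    }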
2014-06-19  optimize i386 ___tls_get_addr asm  (Rich Felker, 1 file, -1/+8)
2014-06-10  simplify errno implementation  (Rich Felker, 1 file, -1/+0)
the motivation for the errno_ptr field in the thread structure, which this commit removes, was to allow the main thread's errno to keep its address when lazy thread pointer initialization was used. &errno was evaluated prior to setting up the thread pointer and stored in errno_ptr for the main thread; subsequently created threads would have errno_ptr pointing to their own errno_val in the thread structure. since lazy initialization was removed, there is no need for this extra level of indirection; __errno_location can simply return the address of the thread's errno_val directly. this does cause &errno to change, but the change happens before entry to application code, and thus is not observable.
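a sketch of the simplified accessor; names are illustrative:

    struct thr2_sketch { int errno_val; };
    struct thr2_sketch *self2_sketch(void);  /* assumed thread-pointer access */

    int *errno_location_sketch(void)
    {
        return &self2_sketch()->errno_val;   /* no errno_ptr indirection */
    }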
2014-06-10  replace all remaining internal uses of pthread_self with __pthread_self  (Rich Felker, 9 files, -10/+10)
prior to version 1.1.0, the difference between pthread_self (the public function) and __pthread_self (the internal macro or inline function) was that the former would lazily initialize the thread pointer if it was not already initialized, whereas the latter would crash in this case. since lazy initialization is no longer supported, use of pthread_self no longer makes sense; it simply generates larger, slower code.
2014-06-10  add thread-pointer support for pre-2.6 kernels on i386  (Rich Felker, 1 file, -4/+18)
such kernels cannot support threads, but the thread pointer is also important for other purposes, most notably stack protector. without a valid thread pointer, all code compiled with stack protector will crash. the same applies to any use of thread-local storage by applications or libraries. the concept of this patch is to fall back to using the modify_ldt syscall, which has been around since linux 1.0, to setup the gs segment register. since the kernel does not have a way to automatically assign ldt entries, use of slot zero is hard-coded. if this fallback path is used, __set_thread_area returns a positive value (rather than the usual zero for success, or negative for error) indicating to the caller that the thread pointer was successfully set, but only for the main thread, and that thread creation will not work properly. the code in __init_tp has been changed accordingly to record this result for later use by pthread_create.
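a rough, i386-only sketch of the fallback under the assumptions above; modify_ldt function code 1 writes an entry, and the details (descriptor fields, selector arithmetic) should be treated as illustrative:

    #include <asm/ldt.h>        /* struct user_desc */
    #include <sys/syscall.h>
    #include <unistd.h>

    static int set_tp_ldt_sketch(void *tp)
    {
        struct user_desc d = {
            .entry_number = 0,               /* slot 0, hard-coded */
            .base_addr = (unsigned long)tp,
            .limit = 0xfffff,
            .seg_32bit = 1,
            .limit_in_pages = 1,
        };
        if (syscall(SYS_modify_ldt, 1, &d, sizeof d) < 0) return -1;
        /* selector 7 = LDT entry 0, table-indicator bit, RPL 3 */
        __asm__ __volatile__ ("mov %0,%%gs" : : "r"(7));
        return 1;  /* positive: tp set, but thread creation won't work */
    }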
2014-04-15  fix deadlock race in pthread_once  (Rich Felker, 1 file, -2/+1)
at the end of successful pthread_once, there was a race window during which another thread calling pthread_once would momentarily change the state back from 2 (finished) to 1 (in-progress). in this case, the status was immediately changed back, but with no wake call, meaning that waiters which arrived during this short window could block forever. there are two possible fixes. one would be adding the wake to the code path where it was missing. but it's better just to avoid reverting the status at all, by using compare-and-swap instead of swap.
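a sketch of the fixed logic in C11 atomics; the futex wrappers are assumed, and real pthread_once also handles cancellation, which is omitted here:

    #include <stdatomic.h>

    void fwait_sketch(atomic_int *a, int val);  /* assumed futex wrappers */
    void fwake_sketch(atomic_int *a);

    void once_sketch(atomic_int *control, void (*init)(void))
    {
        for (;;) {
            int expect = 0;
            if (atomic_load(control) == 2) return;  /* finished; with CAS below,
                                                       this state is never
                                                       momentarily reverted */
            if (atomic_compare_exchange_strong(control, &expect, 1)) {
                init();
                atomic_store(control, 2);
                fwake_sketch(control);              /* wake queued waiters */
                return;
            }
            fwait_sketch(control, 1);  /* kernel re-checks the value, so a
                                          concurrent store of 2 isn't missed */
        }
    }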
2014-03-24  fix pointer type mismatch and misplacement of const  (Rich Felker, 1 file, -2/+2)
2014-03-24  always initialize thread pointer at program start  (Rich Felker, 5 files, -52/+23)
this is the first step in an overhaul aimed at greatly simplifying and optimizing everything dealing with thread-local state.

previously, the thread pointer was initialized lazily on first access, or at program startup if stack protector was in use, or at certain random places where inconsistent state could be reached if it were not initialized early. while believed to be fully correct, the logic was fragile and non-obvious. in the first phase of the thread pointer overhaul, support is retained (and in some cases improved) for systems/situations where loading the thread pointer fails, e.g. old kernels.

some notes on specific changes:

- the confusing use of libc.main_thread as an indicator that the thread pointer is initialized is eliminated in favor of an explicit has_thread_pointer predicate.
- sigaction no longer needs to ensure that the thread pointer is initialized before installing a signal handler (this was needed to prevent a situation where the signal handler caused the thread pointer to be initialized and the subsequent sigreturn cleared it again) but it still needs to ensure that implementation-internal thread-related signals are not blocked.
- pthread tsd initialization for the main thread is deferred in a new manner to minimize bloat in the static-linked __init_tp code.
- pthread_setcancelstate no longer needs special handling for the situation before the thread pointer is initialized. it simply fails on systems that cannot support a thread pointer, which are non-conforming anyway.
- pthread_cleanup_push/pop now check for missing thread pointer and nop themselves out in this case, so stdio no longer needs to avoid the cancellable path when the thread pointer is not available.

a number of cases remain where certain interfaces may crash if the system does not support a thread pointer. at this point, these should be limited to pthread interfaces, and the number of such cases should be fewer than before.
2014-02-27  rename superh port to "sh" for consistency  (Rich Felker, 4 files, -0/+0)
linux, gcc, etc. all use "sh" as the name for the superh arch. there was already some inconsistency internally in musl: the dynamic linker was searching for "ld-musl-sh.path" as its path file despite its own name being "ld-musl-superh.so.1". there was some sentiment in both directions as to how to resolve the inconsistency, but overall "sh" was favored.
2014-02-23  superh port  (Bobby Bingham, 4 files, -0/+113)
2014-02-23  mostly-cosmetic fixups to x32 port merge  (Rich Felker, 2 files, -6/+9)
2014-02-23  x32 port (diff against vanilla x86_64)  (rofl0r, 4 files, -10/+8)
2014-02-23  import vanilla x86_64 code as x32  (rofl0r, 4 files, -0/+70)
2014-02-22  use syscall_arg_t type for syscall prototypes in pthread code  (rofl0r, 2 files, -3/+8)
2014-02-09  clone: make clone a wrapper around __clone  (Bobby Bingham, 5 files, -18/+3)
The architecture-specific assembly versions of clone did not set errno on failure, which is inconsistent with glibc. __clone still returns the error via its return value, and clone is now a wrapper that sets errno as needed. The public clone has also been moved to src/linux, as it's not directly related to the pthreads API. __clone is called by pthread_create, which does not report errors via errno. Though not strictly necessary, it's nice to avoid clobbering errno here.
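a sketch of the wrapper; __clone's variadic tail is elided:

    #include <errno.h>

    int __clone(int (*f)(void *), void *stack, int flags, void *arg, ...);

    int clone_sketch(int (*f)(void *), void *stack, int flags, void *arg)
    {
        int r = __clone(f, stack, flags, arg);
        if (r < 0) {
            errno = -r;   /* translate the raw negative-errno return */
            return -1;
        }
        return r;
    }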
2014-01-06  eliminate explicit (long) casts when making syscalls  (Rich Felker, 1 file, -1/+1)
this practice came from very early, before internal/syscall.h defined macros that could accept pointer arguments directly and handle them correctly. aside from being ugly and unnecessary, it looks like it will be problematic when we add support for 32-bit ABIs on archs where registers (and syscall arguments) are 64-bit, e.g. x32 and mips n32.
2013-12-12  include cleanups: remove unused headers and add feature test macros  (Szabolcs Nagy, 3 files, -3/+0)
2013-10-04  fix invalid implicit pointer conversion in pthread_key_create  (Rich Felker, 1 file, -1/+1)
2013-09-20  fix potential deadlock bug in libc-internal locking logic  (Rich Felker, 1 file, -3/+6)
if a multithreaded program became non-multithreaded (i.e. all other threads exited) while one thread held an internal lock, the remaining thread would fail to release the lock. if the program then became multithreaded again at a later time, any further attempts to obtain the lock would deadlock permanently. the underlying cause is that the value of libc.threads_minus_1 at unlock time might not match the value at lock time. one solution would be returning a flag to the caller indicating whether the lock was taken and needs to be unlocked, but there is a simpler solution: using the lock itself as such a flag. note that this flag is not needed anyway for correctness; if the lock is not held, the unlock code is harmless. however, the memory synchronization properties associated with a_store are costly on some archs, so it's best to avoid executing the unlock code when it is unnecessary.
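a sketch of the scheme with C11 atomics standing in for musl's a_* primitives; the futex wait path and waiter accounting are omitted:

    #include <stdatomic.h>

    extern int threads_minus_1_sketch;     /* assumed stand-in global */
    void wake_one_sketch(atomic_int *lk);  /* assumed futex wake wrapper */

    static void lock_sketch(atomic_int *lk)
    {
        int expect = 0;
        if (threads_minus_1_sketch)        /* no-op when single-threaded */
            while (!atomic_compare_exchange_weak(lk, &expect, 1))
                expect = 0;
    }

    static void unlock_sketch(atomic_int *lk)
    {
        if (atomic_load(lk)) {  /* the lock word itself records whether
                                   lock_sketch actually took it */
            atomic_store(lk, 0);
            wake_one_sketch(lk);
        }
    }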
2013-09-16  fix clobbering of caller's stack in mips __clone function  (Rich Felker, 1 file, -0/+3)
this was resulting in crashes in posix_spawn on mips, and would have affected applications calling clone too. since the prototype for __clone has it as a variadic function, it may not assume that 16($sp) is writable for use in making the syscall. instead, it needs to allocate additional stack space, and then adjust the stack pointer back in both of the code paths for the parent process/thread.
2013-09-16  omit CLONE_PARENT flag to clone in pthread_create  (Rich Felker, 1 file, -1/+1)
CLONE_PARENT is not necessary (CLONE_THREAD provides all the useful parts of it) and Linux treats CLONE_PARENT as an error in certain situations, without noticing that it would be a no-op due to CLONE_THREAD. this error case prevents, for example, use of a multi-threaded init process and certain usages with containers.
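for reference, a sketch of a typical symbolic flag set for thread creation (CLONE_PARENT deliberately absent); the exact set musl uses may differ, and see also the next entry:

    #define _GNU_SOURCE
    #include <sched.h>

    const unsigned thread_flags_sketch =
        CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND | CLONE_THREAD
        | CLONE_SYSVSEM | CLONE_SETTLS | CLONE_PARENT_SETTID
        | CLONE_CHILD_CLEARTID;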
2013-09-16  use symbolic names for clone flags in pthread_create  (Rich Felker, 1 file, -2/+5)
2013-09-15  support configurable page size on mips, powerpc and microblaze  (Szabolcs Nagy, 2 files, -0/+2)
PAGE_SIZE was hardcoded to 4096, which is historically what most systems use, but on several archs it is a kernel config parameter, so user space can only learn it at execution time from the aux vector. PAGE_SIZE and PAGESIZE are not defined on archs where page size is a runtime parameter; applications should use sysconf(_SC_PAGE_SIZE) to query it. Internally, libc code defines PAGE_SIZE to libc.page_size, which is set to aux[AT_PAGESZ] in __init_libc and early in __dynlink as well. (Note that libc.page_size can be accessed without the GOT, i.e. before relocations are done.) Some fpathconf settings are hardcoded to 4096; these should actually be queried from the filesystem using statfs.
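a sketch of the scheme; struct and function names are illustrative:

    #include <elf.h>       /* AT_PAGESZ */
    #include <stddef.h>

    static struct { size_t page_size; } libc_sketch;
    #define PAGE_SIZE_SKETCH (libc_sketch.page_size)

    static void read_auxv_sketch(size_t *auxv)
    {
        /* the aux vector is (key, value) pairs terminated by a zero key */
        for (size_t *v = auxv; v[0]; v += 2)
            if (v[0] == AT_PAGESZ) libc_sketch.page_size = v[1];
    }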
2013-09-14  fix child stack alignment on mips clone  (Rich Felker, 1 file, -0/+1)
unlike other archs, the mips version of clone was not doing anything to align the stack pointer. this seems to have been the cause for some SIGBUS crashes that were observed in posix_spawn.
2013-09-02  fix mips-specific bug in synccall (too little space for signal mask)  (Rich Felker, 1 file, -5/+3)
switch to the new __block_all_sigs/__restore_sigs internal API to clean up the code too.
2013-09-02  in synccall, ignore the signal before any threads' signal handlers return  (Rich Felker, 1 file, -4/+4)
this protects against deadlock from spurious signals (e.g. sent by another process) arriving after the controlling thread releases the other threads from the sync operation.
2013-09-02  fix invalid pointer in synccall (multithread setuid, etc.)  (Rich Felker, 1 file, -0/+1)
the head pointer was not being reset between calls to synccall, so any use of this interface more than once would build the linked list incorrectly, keeping the (now invalid) list nodes from the previous call.
2013-07-31  in pthread_getattr_np, use mremap rather than madvise to measure stack  (Rich Felker, 1 file, -1/+2)
the original motivation for this patch was that qemu (and possibly other syscall emulators) nop out madvise, resulting in an infinite loop. however, there is another benefit to this change: madvise may actually undo an explicit madvise the application intended for its stack, whereas the mremap operation is a true nop. the logic here is that mremap must fail if it cannot resize the mapping in-place, and the caller knows that it cannot resize in-place because it knows the next page of virtual memory is already occupied.
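a sketch of the probe; 'top' is a known-mapped address at the high end of the stack and PAGE the page size, both supplied by the caller as described above:

    #define _GNU_SOURCE     /* mremap */
    #include <errno.h>
    #include <sys/mman.h>

    static size_t stack_below_sketch(char *top, size_t PAGE)
    {
        size_t len = PAGE;
        /* growing a one-page mapping in place (flags 0, no MREMAP_MAYMOVE)
         * must fail with ENOMEM while the page above it is occupied, so
         * each failure proves one more mapped page and changes nothing */
        while (mremap(top - len - PAGE, PAGE, 2 * PAGE, 0) == MAP_FAILED
               && errno == ENOMEM)
            len += PAGE;
        return len;   /* bytes of contiguous mapping ending at 'top' */
    }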