path: root/src/thread
Age  Commit message  Author  Files  Lines
2011-07-30  fix bug in synccall with no threads: lock was taken but never released  Rich Felker  1  -4/+4
2011-07-29  new attempt at making set*id() safe and robust  Rich Felker  3  -118/+113
changing credentials in a multi-threaded program is extremely difficult on linux because it requires synchronizing the change between all threads, which have their own thread-local credentials on the kernel side. this is further complicated by the fact that changing the real uid can fail due to exceeding RLIMIT_NPROC, making it possible that the syscall will succeed in some threads but fail in others.

the old __rsyscall approach being replaced was robust in that it would report failure if any one thread failed, but in this case, the program would be left in an inconsistent state where individual threads might have different uids. (this was not as bad as glibc, which would sometimes even fail to report the failure entirely!)

the new approach being committed refuses to change the real user id when it cannot temporarily set the rlimit to infinity. this is completely POSIX conformant since POSIX does not require an implementation to allow real-user-id changes for non-privileged processes whatsoever. still, setting the real uid can fail due to memory allocation in the kernel, but this can only happen if there is not already a cached object for the target user. thus, we forcibly serialize the syscall attempts, and fail the entire operation on the first failure. this *should* lead to an all-or-nothing success/failure result, but it's still fragile and highly dependent on kernel developers not breaking things worse than they're already broken.

ideally linux will eventually add a CLONE_USERCRED flag that would give POSIX conformant credential changes without any hacks from userspace, and all of this code would become redundant and could be removed ~10 years down the line when everyone has abandoned the old broken kernels. i'm not holding my breath...
2011-06-26  fix useless use of potentially-uninitialized mode variable in sem_open  Rich Felker  1  -1/+1
2011-06-14  restore use of .type in asm, but use modern @function (vs %function)  Rich Felker  10  -0/+11
this seems to be necessary to make the linker accept the functions in a shared library (perhaps to generate PLT entries?). strictly speaking, libc-internal asm should not need it; i might clean that up later.
2011-06-14  fix race condition in pthread_kill  Rich Felker  2  -1/+7
if thread id was reused by the kernel between the time pthread_kill read it from the userspace pthread_t object and the time of the tgkill syscall, a signal could be sent to the wrong thread. the tgkill syscall was supposed to prevent this race (versus the old tkill syscall) but it can't; it can only help in the case where the tid is reused in a different process, but not when the tid is reused in the same process. the only solution i can see is an extra lock to prevent threads from exiting while another thread is trying to pthread_kill them. it should be very very cheap in the non-contended case.
2011-06-14  run dtors before taking the exit-lock in pthread exit  Rich Felker  1  -2/+2
previously a long-running dtor could cause pthread_detach to block.
2011-06-14  minor locking optimizations  Rich Felker  2  -2/+2
2011-06-13  remove all .size and .type directives for functions from the asm  Rich Felker  10  -18/+0
these are useless and have caused problems for users trying to build with non-gnu tools like tcc's assembler.
2011-05-30  implement pthread_[sg]etconcurrency.  Rich Felker  2  -0/+15
there is a resource limit of 0 bits to store the concurrency level requested. thus any positive level exceeds a resource limit, resulting in EAGAIN. :-)
2011-05-07  optimize out useless default-attribute object in pthread_create  Rich Felker  1  -7/+7
2011-05-07  optimize compound-literal sigset_t's not to contain useless hurd bits  Rich Felker  1  -2/+2
2011-05-07  overhaul implementation-internal signal protections  Rich Felker  2  -15/+6
the new approach relies on the fact that the only ways to create sigset_t objects without invoking UB are to use the sig*set() functions, or from the masks returned by sigprocmask, sigaction, etc. or in the ucontext_t argument to a signal handler. thus, as long as sigfillset and sigaddset avoid adding the "protected" signals, there is no way the application will ever obtain a sigset_t including these bits, and thus no need to add the overhead of checking/clearing them when sigprocmask or sigaction is called.

note that the old code actually *failed* to remove the bits from sa_mask when sigaction was called.

the new implementations are also significantly smaller, simpler, and faster due to ignoring the useless "GNU HURD signals" 65-1024, which are not used and, if there's any sanity in the world, never will be used.
2011-05-06  reduce some ridiculously large spin counts  Rich Felker  1  -1/+1
these should be tweaked according to testing. offhand i know 1000 is too low and 5000 is likely to be sufficiently high. consider trying to add futexes to file locking, too...
2011-05-06  remove debug code that was missed in barrier commit  Rich Felker  1  -1/+0
2011-05-06  completely new barrier implementation, addressing major correctness issues  Rich Felker  1  -16/+44
the previous implementation had at least 2 problems:

1. the case where additional threads reached the barrier before the first wave was finished leaving the barrier was untested and seemed not to be working.
2. threads leaving the barrier continued to access memory within the barrier object after other threads had successfully returned from pthread_barrier_wait. this could lead to memory corruption or crashes if the barrier object had automatic storage in one of the waiting threads and went out of scope before all threads finished returning, or if one thread unmapped the memory in which the barrier object lived.

the new implementation avoids both problems by making the barrier state essentially local to the first thread which enters the barrier wait, and forces that thread to be the last to return.
2011-04-22  fix initial stack alignment in new threads on x86_64  Rich Felker  1  -1/+1
2011-04-20  fix minor bugs due to incorrect threaded-predicate semantics  Rich Felker  2  -5/+3
some functions that should have been testing whether pthread_self() had been called and initialized the thread pointer were instead testing whether pthread_create() had been called and actually made the program "threaded". while it's unlikely any mismatch would occur in real-world programs, this could have introduced subtle bugs.

now, we store the address of the main thread's thread descriptor in the libc structure and use its presence as a flag that the thread register is initialized. note that after fork, the calling thread (not necessarily the original main thread) is the new main thread.
2011-04-19  move some more code out of pthread_create.c  Rich Felker  2  -7/+4
this also de-uglifies the dummy function aliasing a bit.
2011-04-19  fix uninitialized waiters field in semaphores  Rich Felker  1  -0/+1
2011-04-18  recheck cancellation disabled flag after syscall returns EINTR  Rich Felker  1  -1/+1
we already checked before making the syscall, but it's possible that a signal handler interrupted the blocking syscall and disabled cancellation, and that this is the cause of EINTR. in this case, the old behavior was testably wrong.
2011-04-17  fix typo in x86_64 cancellable syscall asm  Rich Felker  1  -1/+1
2011-04-17  pthread_exit is not supposed to affect cancellability  Rich Felker  1  -2/+0
if the exit was caused by cancellation, __cancel has already set these flags anyway.
2011-04-17  fix pthread_exit from cancellation handler  Rich Felker  1  -5/+5
cancellation frames were not correctly popped, so this usage would not only loop, but also reuse discarded and invalid parts of the stack.
2011-04-17  clean up handling of thread/nothread mode, locking  Rich Felker  3  -16/+10
2011-04-17  debloat: use __syscall instead of syscall where possible  Rich Felker  2  -2/+2
don't waste time (and significant code size due to function call overhead!) setting errno when the result of a syscall does not matter or when it can't fail.
2011-04-17  fix bugs in cancellable syscall asm  Rich Felker  3  -11/+12
x86_64 was just plain wrong in the cancel-flag-already-set path, and crashing.

the more subtle error was not clearing the saved stack pointer before returning to c code. this could result in the signal handler misidentifying c code as the pre-syscall part of the asm, and acting on cancellation at the wrong time, and thus resource leak race conditions.

also, now __cancel (in the c code) is responsible for clearing the saved sp in the already-cancelled branch. this means we have to use call rather than jmp to ensure the stack pointer in the c code will never match what the asm saved.
2011-04-17  optimize cancellation enable/disable code  Rich Felker  3  -4/+10
the goal is to be able to use pthread_setcancelstate internally in the implementation, whenever a function might want to use functions which are cancellation points but avoid becoming a cancellation point itself. i could have just used a separate internal function for temporarily inhibiting cancellation, but the solution in this commit is better because (1) it's one less implementation-specific detail in functions that need to use it, and (2) application code can also get the same benefit.

previously, pthread_setcancelstate depended on pthread_self, which would pull in unwanted thread setup overhead for non-threaded programs. now, it temporarily stores the state in the global libc struct if threads have not been initialized, and later moves it if needed. this way we can instead use __pthread_self, which has no dependencies and assumes that the thread register is already valid.
2011-04-17  don't use pthread_once when there is no danger in race  Rich Felker  1  -2/+5
2011-04-17  fix some minor issues in cancellation handling patch  Rich Felker  3  -11/+19
signals were wrongly left masked, and cancellability state was not switched to disabled, during the execution of cleanup handlers.
2011-04-17  overhaul pthread cancellation  Rich Felker  13  -59/+182
this patch improves the correctness, simplicity, and size of cancellation-related code. modulo any small errors, it should now be completely conformant, safe, and resource-leak free.

the notion of entering and exiting cancellation-point context has been completely eliminated and replaced with alternative syscall assembly code for cancellable syscalls. the assembly is responsible for setting up execution context information (stack pointer and address of the syscall instruction) which the cancellation signal handler can use to determine whether the interrupted code was in a cancellable state.

these changes eliminate race conditions in the previous generation of cancellation handling code (whereby a cancellation request received just prior to the syscall would not be processed, leaving the syscall to block, potentially indefinitely), and remedy an issue where non-cancellable syscalls made from signal handlers became cancellable if the signal handler interrupted a cancellation point.

x86_64 asm is untested and may need a second try to get it right.
2011-04-14  change sem_trywait algorithm so it never has to call __wake  Rich Felker  1  -3/+2
2011-04-14  cheap trick to further optimize locking normal mutexes  Rich Felker  2  -2/+2
2011-04-14  use a separate signal from SIGCANCEL for SIGEV_THREAD timers  Rich Felker  1  -2/+0
otherwise we cannot support an application's desire to use asynchronous cancellation within the callback function. this change also slightly debloats pthread_create.c.
2011-04-13  simplify cancellation point handling  Rich Felker  2  -16/+5
we take advantage of the fact that unless self->cancelpt is 1, cancellation cannot happen. so just increment it by 2 to temporarily block cancellation. this drops pthread_create.o well under 1k.
2011-04-06  fixed crash in new rsyscall (failure to set sa_flags for signal handler)  Rich Felker  1  -0/+2
2011-04-06  consistency: change all remaining syscalls to use SYS_ rather than __NR_ prefix  Rich Felker  7  -8/+8
2011-04-06  move rsyscall out of pthread_create module  Rich Felker  2  -96/+122
this is something of a tradeoff, as now set*id() functions, rather than pthread_create, are what pull in the code overhead for dealing with linux's refusal to implement proper POSIX thread-vs-process semantics. my motivations are:

1. it's cleaner this way, especially cleaner to optimize out the rsyscall locking overhead from pthread_create when it's not needed.
2. it's expected that only a tiny number of core system programs will ever use set*id() functions, whereas many programs may want to use threads, and making thread overhead tiny is an incentive for "light" programs to try threads.
2011-04-06  pthread exit stuff: don't bother setting errno when we won't check it.  Rich Felker  1  -2/+2
2011-04-06  fix rsyscall handler: must not clobber errno from signal context  Rich Felker  1  -2/+4
2011-04-06  major semaphore improvements (performance and correctness)  Rich Felker  5  -21/+37
1. make sem_[timed]wait interruptible by signals, per POSIX
2. keep a waiter count in order to avoid unnecessary futex wake syscalls
2011-04-05  new framework to inhibit thread cancellation when needed  Rich Felker  2  -5/+15
with these small changes, libc functions which need to call functions which are cancellation points, but which themselves must not be cancellation points, can use the CANCELPT_INHIBIT and CANCELPT_RESUME macros to temporarily inhibit all cancellation.
2011-04-03  pthread_create need not set errno  Rich Felker  1  -1/+1
2011-04-03  block all signals during rsyscall  Rich Felker  1  -4/+9
otherwise a signal handler could see an inconsistent and nonconformant program state where different threads have different uids/gids.
2011-04-03  fix race condition in rsyscall handler  Rich Felker  1  -1/+1
the problem: there is a (single-instruction) race condition window between a thread flagging itself dead and decrementing itself from the thread count. if it receives the rsyscall signal at this exact moment, the rsyscall caller will never succeed in signalling enough flags to succeed, and will deadlock forever.

in previous versions of musl, the about-to-terminate thread masked all signals prior to decrementing the thread count, but this cost a whole syscall just to account for extremely rare races.

the solution is a huge hack: rather than blocking in the signal handler if the thread is dead, modify the signal mask of the saved context and return in order to prevent further signal handling by the dead thread. this allows the dead thread to continue decrementing the thread count (if it had not yet done so) and exiting, even while the live part of the program blocks for rsyscall.
2011-04-03  don't trust siginfo in rsyscall handler  Rich Felker  1  -3/+2
for some inexplicable reason, linux allows the sender of realtime signals to spoof its identity. permission checks for sending signals should limit the impact to same-user processes, but just to be safe, we avoid trusting the siginfo structure and instead simply examine the program state to see if we're in the middle of a legitimate rsyscall.
2011-04-03  simplify calling of timer signal handler  Rich Felker  1  -7/+4
2011-04-03  simplify pthread tsd key handling  Rich Felker  2  -8/+6
2011-04-03  omit pthread tsd dtor code if tsd is not used  Rich Felker  2  -14/+24
2011-04-01  simplify setting result on thread cancellation  Rich Felker  1  -1/+1
2011-04-01  use bss instead of mmap for main thread's pthread thread-specific data  Rich Felker  2  -9/+4
this simplifies code and removes a failure case