if a multithreaded program became non-multithreaded (i.e. all other
threads exited) while one thread held an internal lock, the remaining
thread would fail to release the lock. if the program then became
multithreaded again at a later time, any further attempts to obtain
the lock would deadlock permanently.
the underlying cause is that the value of libc.threads_minus_1 at
unlock time might not match the value at lock time. one solution would
be returning a flag to the caller indicating whether the lock was
taken and needs to be unlocked, but there is a simpler solution: using
the lock itself as such a flag.
note that this flag is not needed anyway for correctness; if the lock
is not held, the unlock code is harmless. however, the memory
synchronization properties associated with a_store are costly on some
archs, so it's best to avoid executing the unlock code when it is
unnecessary.
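
roughly, the resulting pair looks like this (a sketch modeled on musl's
internal __lock/__unlock; illustrative, not the verbatim code):

    void __lock(volatile int *l)
    {
        /* no locking needed while only one thread exists */
        if (libc.threads_minus_1)
            while (a_swap(l, 1)) __wait(l, 0, 1, 1);
    }

    void __unlock(volatile int *l)
    {
        /* the lock word itself records whether the lock was taken;
         * skip the costly a_store when it was never acquired */
        if (*l) {
            a_store(l, 0);
            __wake(l, 1, 1);
        }
    }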
this was resulting in crashes in posix_spawn on mips, and would have
affected applications calling clone too. since the prototype for
__clone has it as a variadic function, it may not assume that 16($sp)
is writable for use in making the syscall. instead, it needs to
allocate additional stack space, and then adjust the stack pointer
back in both of the code paths for the parent process/thread.
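
for reference, the declaration in question (as in musl's internal
headers):

    int __clone(int (*func)(void *), void *stack, int flags, void *arg, ...);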
CLONE_PARENT is not necessary (CLONE_THREAD provides all the useful
parts of it) and Linux treats CLONE_PARENT as an error in certain
situations, without noticing that it would be a no-op due to
CLONE_THREAD. this error case prevents, for example, use of a
multi-threaded init process and certain usages with containers.
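
the resulting flag set in pthread_create is roughly the following
(reconstructed, not verbatim), with CLONE_PARENT simply dropped:

    unsigned flags = CLONE_VM | CLONE_FS | CLONE_FILES | CLONE_SIGHAND
        | CLONE_THREAD | CLONE_SYSVSEM | CLONE_SETTLS
        | CLONE_PARENT_SETTID | CLONE_CHILD_CLEARTID | CLONE_DETACHED;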
PAGE_SIZE was hardcoded to 4096, which is historically what most
systems use, but on several archs it is a kernel config parameter,
so user space can only learn it at execution time from the aux vector.
PAGE_SIZE and PAGESIZE are not defined on archs where page size is
a runtime parameter; applications should use sysconf(_SC_PAGE_SIZE)
to query it. Internally libc code defines PAGE_SIZE to libc.page_size,
which is set to aux[AT_PAGESZ] in __init_libc and early in __dynlink
as well. (Note that libc.page_size can be accessed without the GOT,
i.e. before relocations are done.)
Some fpathconf settings are hardcoded to 4096; these should actually
be queried from the filesystem using statfs.
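
the internal arrangement is roughly (sketch):

    /* internal libc.h: archs with a runtime page size leave PAGE_SIZE
     * undefined in their bits headers, so libc code falls back to the
     * value recorded at startup */
    #ifndef PAGE_SIZE
    #define PAGE_SIZE libc.page_size
    #endif

    /* in __init_libc (and likewise early in the dynamic linker): */
    libc.page_size = aux[AT_PAGESZ];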
unlike other archs, the mips version of clone was not doing anything
to align the stack pointer. this seems to have been the cause of some
SIGBUS crashes that were observed in posix_spawn.
switch to the new __block_all_sigs/__restore_sigs internal API to
clean up the code too.
this protects against deadlock from spurious signals (e.g. sent by
another process) arriving after the controlling thread releases the
other threads from the sync operation.
the head pointer was not being reset between calls to synccall, so any
use of this interface more than once would build the linked list
incorrectly, keeping the (now invalid) list nodes from the previous
call.
the original motivation for this patch was that qemu (and possibly
other syscall emulators) nop out madvise, resulting in an infinite
loop. however, there is another benefit to this change: madvise may
actually undo an explicit madvise the application intended for its
stack, whereas the mremap operation is a true nop. the logic here is
that mremap must fail if it cannot resize the mapping in-place, and
the caller knows that it cannot resize in-place because it knows the
next page of virtual memory is already occupied.
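
the probe loop then looks roughly like this (a reconstruction; p is
the page-aligned top of the stack, l the extent found so far):

    while (mremap(p-l-PAGE_SIZE, PAGE_SIZE, 2*PAGE_SIZE, 0) == MAP_FAILED
           && errno == ENOMEM)
        l += PAGE_SIZE;

as long as the probed page is part of the stack, enlarging it in place
must fail with ENOMEM (the page above it is known to be occupied); once
the loop walks off the bottom of the mapping, mremap fails with a
different errno (EFAULT) and the loop terminates.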
this change is to get the right tags for C++ ABI matching. it should
have no other effects.
the address of the pointer to the sched param, rather than the
pointer, was being passed to the kernel.
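
the shape of the bug, in sketch form (assuming the affected call was
the thread scheduling wrapper):

    /* param is already a pointer to struct sched_param */
    __syscall(SYS_sched_setscheduler, t->tid, policy, &param); /* wrong */
    __syscall(SYS_sched_setscheduler, t->tid, policy, param);  /* fixed */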
fstat should not fail under normal circumstances, so this fix is
mostly theoretical.
the address of the pointer, rather than the pointer, was being passed.
this was probably a copy-and-paste error from corresponding get code.
apparently these features have been in Linux for a while now, so it
makes sense to support them. the bit twiddling seems utterly illogical
and wasteful, especially the negation, but that's how the kernel folks
chose to encode pids/tids into the clock id.
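
the encoding follows Linux's internal CPUCLOCK scheme (sketch; numeric
constants are from the kernel headers):

    /* pid/tid is stored bitwise-complemented in the upper bits; the
     * low 3 bits select the clock type, and bit 2 marks a per-thread
     * clock */
    clockid_t thread_clk  = (-tid-1)*8U + 6; /* CPUCLOCK_SCHED|PERTHREAD */
    clockid_t process_clk = (-pid-1)*8U + 2; /* CPUCLOCK_SCHED */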
there are several reasons for this change. one is getting rid of the
repetition of the syscall signature all over the place. another is
sharing the constant masks without costly GOT accesses in PIC.
the main motivation, however, is accurately representing whether we
want to block signals that might be handled by the application, or all
signals.
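
the API amounts to the following (sketch; SIGALL_SET stands for a
shared static mask, so no GOT access is needed):

    void __block_all_sigs(void *set);  /* block everything */
    void __block_app_sigs(void *set);  /* leave internal signals open */
    void __restore_sigs(void *set);    /* restore a saved mask */

    void __block_all_sigs(void *set)
    {
        /* save the old mask into set while blocking all signals */
        __syscall(SYS_rt_sigprocmask, SIG_BLOCK, SIGALL_SET, set, _NSIG/8);
    }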
exiting threads have already blocked signals before decrementing the
thread count, so the code being removed is unreachable in the case
where the thread is no longer counted.
this was simply a case of saving the state in the wrong place.
the previous few commits ended up leaving the thread count and signal
mask wrong for atexit handlers and stdio cleanup.
now that blocking signals prevents any application code from running
while the last thread is exiting, the cas logic is no longer needed to
prevent decrementing below zero.
the thread count (1+libc.threads_minus_1) must always be greater than
or equal to the number of threads which could have application code
running, even in an async-signal-safe sense. there is at least one
dangerous race condition if this invariant fails to hold: dlopen could
allocate too little TLS for existing threads, and a signal handler
running in the exiting thread could claim the allocated TLS for itself
(via __tls_get_addr), leaving too little for the other threads it was
allocated for and thereby causing out-of-bounds access.
there may be other situations where it's dangerous for the thread
count to be too low, particularly in the case where only one thread
should be left, in which case locking may be omitted. however, all
such code paths seem to arise from undefined behavior, since
async-signal-unsafe functions are not permitted to be called from a
signal handler that interrupts pthread_exit (which is itself
async-signal-unsafe).
this change may also simplify logic in __synccall and improve the
chances of making __synccall async-signal-safe.
this function is mainly (purely?) for obtaining stack address
information, but we also provide the detach state since it's easy to
do anyway.
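
typical usage (pthread_getattr_np is a GNU extension; results are read
back through the standard attribute getters):

    pthread_attr_t a;
    void *addr;
    size_t size;
    int detached;

    pthread_getattr_np(pthread_self(), &a);
    pthread_attr_getstack(&a, &addr, &size);
    pthread_attr_getdetachstate(&a, &detached);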
the issue at hand is that many syscalls require as an argument the
kernel-ABI size of sigset_t, intended to allow the kernel to switch to
a larger sigset_t in the future. previously, each arch was defining
this size in syscall_arch.h, which was redundant with the definition
of _NSIG in bits/signal.h. as it's used in some not-quite-portable
application code as well, _NSIG is much more likely to be recognized
and understood immediately by someone reading the code, and it's also
shorter and less cluttered.
note that _NSIG is actually 65/129, not 64/128, but the division takes
care of throwing away the off-by-one part.
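
so callers simply pass _NSIG/8 as the size argument, e.g. (sketch):

    __syscall(SYS_rt_sigprocmask, SIG_BLOCK, &set, 0, _NSIG/8);

with _NSIG of 65 or 129, integer division by 8 yields the kernel's
8- or 16-byte sigset size.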
this bug seems to have been around a long time.
this bug was introduced when support for application-provided stacks
was originally added.
the main goal of these changes is to address the case where an
application provides a stack of size N, but TLS has size M that's a
significant portion of the size N (or even larger than N), thus giving
the application less stack space than it expected or no stack at all!
the new strategy pthread_create now uses is to only put TLS on the
application-provided stack if TLS is smaller than 1/8 of the stack
size or 2k, whichever is smaller. this ensures that the application
always has "close enough" to what it requested, and the threshold is
chosen heuristically to make sure "sane" amounts of TLS still end up
in the application-provided stack.
if TLS does not fit the above criteria, pthread_create uses mmap to
obtain space for TLS, but still uses the application-provided stack
for actual call frame stack. this is to avoid wasting memory, and for
the sake of supporting ugly hacks like garbage collection based on
assumptions that the implementation will use the provided stack range.
in order for the above heuristics to ever succeed, the amount of TLS
space wasted on POSIX TSD (pthread_key_create based) needed to be
reduced. otherwise, these changes would preclude any use of
pthread_create without mmap, which would have serious memory usage and
performance costs for applications trying to create huge numbers of
threads using pre-allocated stack space. the new value of
PTHREAD_KEYS_MAX is the minimum allowed by POSIX, 128. this should
still be plenty more than real-world applications need, especially now
that C11/gcc-style TLS is supported in musl, and most apps and
libraries choose to use that instead of POSIX TSD when available.
at the same time, PTHREAD_STACK_MIN has been decreased. it was
originally set to PAGE_SIZE back when there was no support for TLS or
application-provided stacks, and requests smaller than a whole page
did not make sense. now, there are two good reasons to support
requests smaller than a page: (1) applications could provide
pre-allocated stacks smaller than a page, and (2) with smaller stack
sizes, stack+TLS+TSD can all fit in one page, making it possible for
applications which need huge numbers of threads with minimal stack
needs to allocate exactly one page per thread. the new value of
PTHREAD_STACK_MIN, 2k, is aligned with the minimum size for
sigaltstack.
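
the placement logic in pthread_create then looks roughly like this
(a reconstructed fragment of the create path; names approximate musl's
internals):

    if (attr._a_stackaddr) {
        size_t need = libc.tls_size + __pthread_tsd_size;
        size = attr._a_stacksize;
        stack = (void *)(attr._a_stackaddr & -16);
        if (need < size/8 && need < 2048) {
            /* TLS+TSD are small enough: carve them off the top of
             * the application-provided stack */
            tsd = stack - __pthread_tsd_size;
            stack = tsd - libc.tls_size;
        } else {
            /* otherwise mmap space for TLS/TSD separately, keeping
             * the provided stack purely for call frames */
            size = ROUND(need);
            guard = 0;
        }
    }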
this should generate faster and smaller code, especially with inline
syscalls. the conditional with cnt is ugly, but thankfully cnt is
always a constant anyway so it gets evaluated at compile time. it may
be preferable to make separate __wake and __wakeall macros without a
count argument.
priv flag is not used yet; private futex support still needs to be
done at some point in the future.
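
sketch of the macro form (priv is accepted but unused for now):

    #define __wake(addr, cnt, priv) \
        __syscall(SYS_futex, (addr), FUTEX_WAKE, \
                  (cnt) < 0 ? INT_MAX : (cnt))

since cnt is a constant at every call site, the conditional folds away
at compile time.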
these should have little/no practical impact but they're needed for
strict conformance.
sigsetjmp: store temporaries in jmp_buf rather than on stack.
it's essential to decrement the stack pointer before writing to new
stack space, rather than afterwards. otherwise there is a race
condition during which asynchronous code (signals) could clobber the
data being stored.
it may be possible to optimize the code further using stwu, but I
wanted to avoid making any changes to the actual stack layout in this
commit. further improvements can be made separately if desired.
priority inheritance is not yet supported, and priority protection
probably will not be supported ever unless there's serious demand for
it (it's a fairly heavy-weight feature).
per-thread cpu clocks would be nice to have, but to my knowledge linux
is still not capable of supporting them. glibc fakes them by using the
_process_ cpu-time clock and subtracting the thread creation time,
which gives seriously incorrect semantics (worse than not supporting
the feature at all), so until there's a way to do it right, it will
remain as a stub that always fails.
POSIX includes mostly-useless attribute-get functions for each
attribute-set function, presumably out of some object-oriented
dogmatism. the get functions are not useful with the simple idiomatic
usage of attributes. there are of course possible valid uses of them
(like writing wrappers for pthread init functions that perform special
actions on the presence of certain attributes), but considering how
tiny these functions are anyway, little is lost by putting them all in
one file, and some build-time cost and archive-file-size benefits are
achieved.
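
a representative member of the group (sketch; the field name follows
musl's internal attribute layout):

    int pthread_attr_getdetachstate(const pthread_attr_t *a, int *state)
    {
        *state = a->_a_detach;
        return 0;
    }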
linux's sched_* syscalls actually implement the TPS (thread
scheduling) functionality, not the PS (process scheduling)
functionality which the sched_* functions are supposed to have.
omitting support for the PS option (and having the sched_* interfaces
fail with ENOSYS rather than omitting them, since some broken software
assumes they exist) seems to be the only conforming way to do this on
linux.
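
so the interfaces reduce to stubs equivalent to (sketch):

    #include <errno.h>
    #include <sched.h>

    int sched_setscheduler(pid_t pid, int sched, const struct sched_param *param)
    {
        errno = ENOSYS;
        return -1;
    }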
this mirrors the stdio_impl.h cleanup. one header which is not
strictly needed, errno.h, is left in pthread_impl.h because, since
pthread functions return their error codes rather than using errno,
nearly every single pthread function needs the errno constants.
in a few places, rather than bringing in string.h to use memset, the
memset was replaced by direct assignment. this seems to generate much
better code anyway, and makes many functions which were previously
non-leaf functions into leaf functions (possibly eliminating a great
deal of bloat on some platforms where non-leaf functions require ugly
prologue and/or epilogue).
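
e.g. (illustrative):

    /* instead of: memset(a, 0, sizeof *a); */
    *a = (pthread_attr_t){ 0 };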
with this commit, based on testing with patches to qemu which are not
yet upstream,
since it did not set the return-value register, the caller could
wrongly interpret this as failure.
only @PLT relocations are considered functions for purposes of
-Bsymbolic-functions, so always use @PLT. it should not hurt in the
static-linked case.
despite documentation that makes it sound a lot different, the only
ABI-constraint difference between TLS variants II and I seems to be
that variant II stores the initial TLS segment immediately below the
thread pointer (i.e. the thread pointer points to the end of it) and
variant I stores the initial TLS segment above the thread pointer,
requiring the thread descriptor to be stored below. the actual value
stored in the thread pointer register also tends to have per-arch
random offsets applied to it for silly micro-optimization purposes.
with these changes applied, TLS should be basically working on all
supported archs except microblaze. I'm still working on getting the
necessary information and a working toolchain that can build TLS
binaries for microblaze, but in theory, static-linked programs with
TLS and dynamic-linked programs where only the main executable uses
TLS should already work on microblaze.
alignment constraints have not yet been heavily tested, so it's
possible that this code does not always align TLS segments correctly
on archs that need TLS variant I.
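
the upshot for thread-pointer setup, in sketch form (the macro name and
offset here are illustrative, not exact):

    #ifdef TLS_ABOVE_TP
    /* variant I: thread descriptor below the thread pointer, initial
     * TLS above it, typically plus a small per-arch offset */
    #define TP_ADJ(p) ((char *)(p) + sizeof(struct pthread) + TP_OFFSET)
    #else
    /* variant II: thread pointer points at the end of the initial TLS
     * segment, i.e. at the thread descriptor */
    #define TP_ADJ(p) (p)
    #endif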