these exist for the sake of defining the corresponding weak public
aliases (for C11 and POSIX namespace conformance reasons). they are
not referenced by anything else in libc, so make them static.
|
|
policy is that all public functions which have a public declaration
should be defined in a context where that public declaration is
visible, to avoid preventable type mismatches.
an audit performed using GCC's -Wmissing-declarations turned up the
violations corrected here. in some cases the public header had not
been included; in others, a feature test macro needed to make the
declaration visible had been omitted.
in the case of gethostent and getnetent, the omission seems to have
been intentional, as a hack to admit a single stub definition for both
functions. this kind of hack is no longer acceptable; it's UB and
would not fly with LTO or advanced toolchains. the hack is undone to
make exposure of the declarations possible.
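for illustration, the removed hack had roughly this shape (a sketch,
not the exact deleted code; weak_alias is musl's internal aliasing
macro, reproduced here):

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    /* note the deliberately missing <netdb.h>: with the real
     * declarations visible, the mismatched alias would not even
     * compile. one stub body served two functions whose declared
     * return types differ. */
    struct hostent *gethostent(void)
    {
        return 0;
    }
    weak_alias(gethostent, getnetent);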
|
|
this cleans up what had become widespread direct inline use of "GNU C"
style attributes directly in the source, and lowers the barrier to
increased use of hidden visibility, which will be useful to recovering
some of the efficiency lost when the protected visibility hack was
dropped in commit dc2f368e565c37728b0d620380b849c3a1ddd78f, especially
on archs where the PLT ABI is costly.
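for reference, the pattern this enables is a single macro in an
internal header along these lines (a sketch of the idiom, not
necessarily the exact definition; the helper name is illustrative):

    #define hidden __attribute__((__visibility__("hidden")))

    /* internal declarations can then read naturally: */
    hidden void __some_internal_helper(void);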
|
|
__pthread_mutex_timedlock is used to implement c11 mutex functions,
and therefore cannot call pthread_mutex_trylock by name.
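the usual shape of the fix, sketched (the aliasing pattern, not the
exact musl sources):

    #include <pthread.h>

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    int __pthread_mutex_trylock(pthread_mutex_t *m)
    {
        (void)m;
        return 0; /* stand-in for the real trylock logic */
    }
    weak_alias(__pthread_mutex_trylock, pthread_mutex_trylock);

__pthread_mutex_timedlock can then call __pthread_mutex_trylock, so
the c11 mtx functions built on it never reference a POSIX-namespace
symbol.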
|
|
the compiler cannot cache immutable fields of the mutex object across
external calls it can't see, much less across atomics.
|
|
avoid gratuitously setting up and tearing down the robust list pending
slot.
|
|
if __cp_cancel was reached via __syscall_cp, r12 will necessarily
still contain a GOT pointer (for libc.so or for the static-linked main
program) valid for entering __cancel. however, in the case of async
cancellation, r12 may contain any scratch value; it's not necessarily
even a valid GOT pointer for the code that was interrupted.
unlike in commit 0ec49dab6794166d67fae4764ce7fdea42ea6103 where the
corresponding issue was fixed for powerpc64, there is fundamentally no
way for fdpic code to recompute its GOT pointer. so a new mechanism is
introduced for cancel_handler to write a GOT register value into the
interrupted context on archs where it is needed.
|
|
entering the local entry point for __cancel from __cp_cancel is valid
if __cp_cancel was reached from __syscall_cp, since both are in libc
and share the same TOC pointer, but it is not valid if __cp_cancel was
reached when cancel_handler rewrote the program counter for
asynchronous cancellation of code outside libc.
to ensure __cancel is entered with a valid TOC pointer, recompute the
correct value in a PC-relative manner before jumping.
|
|
this is a POSIX requirement.
|
|
__aeabi_read_tp used to call c code, but that was incorrect as the
arm runtime abi specifies special pcs for this function: it is only
allowed to clobber r0, ip, lr and cpsr.
maintainer's note: the old code explicitly saved and restored all
general-purpose registers which are call-clobbered in the normal
calling convention, so it's unlikely that any real-world compilers
produced code that could break. however theoretically they could have
chosen to use floating point registers, in which case the caller's
values of those registers would be clobbered.
|
|
with async cancellation enabled, pthread_cancel(pthread_self())
deadlocked due to pthread_kill holding killlock which is needed by
pthread_exit.
this could be solved by making pthread_kill block signals around the
critical section, at least when the target thread is itself, but the
issue only arises for cancellation, and otherwise would just be
imposing unnecessary cost.
instead just have pthread_cancel explicitly check for async
self-cancellation and call pthread_exit(PTHREAD_CANCELED) directly
rather than going through the signal machinery.
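sketched in C, with illustrative field and signal names (not musl's
exact internals):

    #include <pthread.h>
    #include <signal.h>

    struct thread { pthread_t id; int cancelasync; };

    static void do_cancel(struct thread *t, struct thread *self)
    {
        if (t == self && t->cancelasync) {
            /* async self-cancellation: bypass the signal
             * machinery, which would deadlock on killlock */
            pthread_exit(PTHREAD_CANCELED);
        }
        /* SIGRTMIN stands in for the internal cancel signal */
        pthread_kill(t->id, SIGRTMIN);
    }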
|
|
commit 610c5a8524c3d6cd3ac5a5f1231422e7648a3791 changed the thread
pointer setup so tp points at the end of the pthread struct on arm,
but failed to update __aeabi_read_tp so it was off by 8.
this broke tls access in code compiled with -mtp=soft, which is the
default when the target arch is pre-armv6k or thumb1.
maintainer's note: no release versions are affected.
|
|
Call SYS_exit on return from fn in __clone. This is the expected
behavior of this function. Without this the child task will crash on
return from fn, since it will return to nowhere.
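In C terms, the child side must behave like this conceptual sketch
(the actual change is in the __clone asm):

    #include <unistd.h>
    #include <sys/syscall.h>

    /* hand fn's return value to SYS_exit rather than returning
     * from the initial stack frame, which has no return address */
    static void child_start(int (*fn)(void *), void *arg)
    {
        syscall(SYS_exit, fn(arg));
    }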
|
|
due to moved code, commit b8742f32602add243ee2ce74d804015463726899
inadvertently used the return value of __clone, rather than the return
value of SYS_sched_setscheduler in the new thread, to check whether it
needed to report failure. since a successful __clone returns the tid
of the new thread, which is never zero, this caused pthread_create
always to return with an invalid error number in the code path for
PTHREAD_EXPLICIT_SCHED.
this regression was not present in any releases.
|
|
this fixes a major gap in the intended functionality of
pthread_setattr_default_np. if application/library code creating a
thread does not pass a null attribute pointer to pthread_create, but
sets up an attribute object to change other properties while leaving
the stack alone, the created thread will get a stack with size
DEFAULT_STACK_SIZE. this makes pthread_setattr_default_np useless for
working around stack overflow issues in such applications, and leaves
a major risk of regression if previously-working code switches from
using a null attribute pointer to an attribute object.
this change aligns the behavior more closely with the glibc
pthread_setattr_default_np functionality too, albeit via a different
mechanism. glibc encodes "default" specially in the attribute object
and reads the actual default at thread creation time. with this
commit, we now copy the current default into the attribute object at
pthread_attr_init time, so that applications that query the properties
of the attribute object will see the right values.
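the mechanism, sketched (__default_stacksize and __default_guardsize
stand in for musl's internal globals; names are illustrative):

    #include <stddef.h>
    #include <pthread.h>

    extern size_t __default_stacksize, __default_guardsize;

    int attr_init_sketch(pthread_attr_t *a)
    {
        *a = (pthread_attr_t){0};
        /* snapshot the current defaults into the object, so
         * both thread creation and attr queries see the values
         * actually in effect */
        pthread_attr_setstacksize(a, __default_stacksize);
        pthread_attr_setguardsize(a, __default_guardsize);
        return 0;
    }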
|
|
three ABIs are supported: the default with 68881 80-bit fpu format and
results returned in floating point registers, softfloat-only with the
same format, and coldfire fpu with IEEE single/double only. only the
first is tested at all, and only under qemu which has fpu emulation
bugs.
basic functionality smoke tests have been performed for the most
common arch-specific breakage via libc-test and qemu user-level
emulation. some sysvipc failures remain, but are shared with other big
endian archs and will be fixed separately.
|
|
the wrapper start function that performs scheduling operations is
unreachable if pthread_attr_setinheritsched is never called, so move
it to pthread_attr_setinheritsched's source file rather than the
pthread_create source file, saving some code
size for static-linked programs.
|
|
eliminate the awkward startlock mechanism and corresponding fields of
the pthread structure that were only used at startup.
instead of having pthread_create perform the scheduling operations and
having the new thread wait for them to be completed, start the new
thread with a wrapper start function that performs its own scheduling,
sending the result code back via a futex. this way the new thread can
use storage from the calling thread's stack rather than permanent
fields in the pthread structure.
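sketched with illustrative names (the real code uses raw syscalls and
internal futex helpers):

    #include <errno.h>
    #include <sched.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    struct start_sched_args {
        void *(*fn)(void *);
        void *arg;
        int policy;
        struct sched_param param;
        volatile int futex; /* lives on the creator's stack */
    };

    static void *wrapper_start(void *p)
    {
        struct start_sched_args *a = p;
        void *(*fn)(void *) = a->fn;
        void *arg = a->arg;
        int r = sched_setscheduler(0, a->policy, &a->param) ? errno : 0;
        /* report the result and release pthread_create; a must
         * not be dereferenced after the wake. errno code on
         * failure, -1 on success (0 means not yet done). */
        a->futex = r ? r : -1;
        syscall(SYS_futex, &a->futex, FUTEX_WAKE, 1);
        if (r) return 0; /* pthread_create reports the error */
        return fn(arg);
    }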
|
|
previously, some accesses to the detached state (from pthread_join and
pthread_getattr_np) were unsynchronized; they were harmless in
programs with well-defined behavior, but ugly. other accesses (in
pthread_exit and pthread_detach) were synchronized by a poorly named
"exitlock", with an ad-hoc trylock operation on it open-coded in
pthread_detach, whose only purpose was establishing protocol for which
thread is responsible for deallocation of detached-thread resources.
instead, use an atomic detach_state and unify it with the futex used
to wait for thread exit. this eliminates 2 members from the pthread
structure, gets rid of the hackish lock usage, and makes rigorous the
trap added in commit 80bf5952551c002cf12d96deb145629765272db0 for
catching attempts to join detached threads. it should also make
attempt to detach an already-detached thread reliably trap.
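the state machine, sketched (names mirror the idea; exact internals
may differ):

    /* detach_state doubles as the futex word that join waits on */
    enum { DT_EXITED = 0, DT_EXITING, DT_JOINABLE, DT_DETACHED };

    static int a_cas(volatile int *p, int expect, int desired)
    {
        return __sync_val_compare_and_swap(p, expect, desired);
    }

    struct thread { volatile int detach_state; };

    static int detach_sketch(struct thread *t)
    {
        /* joinable -> detached; any other prior state means the
         * caller raced with thread exit or invoked UB */
        return a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED);
    }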
|
|
if the last thread exited via pthread_exit, the logic that marked it
dead did not account for the possibility of it targeting itself via
atexit handlers. for example, an atexit handler calling
pthread_kill(pthread_self(), SIGKILL) would return success
(previously, ESRCH) rather than causing termination via the signal.
move the release of killlock after the determination is made whether
the exiting thread is the last thread. in the case where it's not,
move the release all the way to the end of the function. this way we
can clear the tid rather than spending storage on a dedicated
dead-flag. clearing the tid is also preferable in that it hardens
against inadvertent use of the value after the thread has terminated
but before it is joined.
|
|
posix documents in the rationale and future directions for
pthread_kill that, since the lifetime of the thread id for a joinable
thread lasts until it is joined, ESRCH is not a correct error for
pthread_kill to produce when the target thread has exited but not yet
been joined, and that conforming applications cannot attempt to detect
this state. future versions of the standard may explicitly require
that ESRCH not be returned for this case.
|
|
the tid field in the pthread structure is not volatile, and really
shouldn't be, so as not to limit the compiler's ability to reorder,
merge, or split loads in code paths that may be relevant to
performance (like controlling lock ownership).
however, use of objects which are not volatile or atomic with futex
wait is inherently broken, since the compiler is free to transform a
single load into multiple loads, thereby using a different value for
the controlling expression of the loop and the value passed to the
futex syscall, leading the syscall to block instead of returning.
reportedly glibc's pthread_join was actually affected by an
equivalent issue on s390.
add a separate, dedicated join_futex object for pthread_join to use.
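the hazard and the fix, sketched:

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    struct thread { int tid; volatile int join_futex; };

    static void join_wait_broken(struct thread *t)
    {
        /* BROKEN: tid is not volatile, so the compiler may emit
         * separate loads for the test and the syscall argument,
         * and FUTEX_WAIT can then block on a stale value */
        while (t->tid)
            syscall(SYS_futex, &t->tid, FUTEX_WAIT, t->tid, 0);
    }

    static void join_wait_fixed(struct thread *t)
    {
        int r;
        /* one load per iteration, captured in a local, so the
         * value tested is exactly the value passed to the kernel */
        while ((r = t->join_futex))
            syscall(SYS_futex, &t->join_futex, FUTEX_WAIT, r, 0);
    }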
|
|
|
|
In all cases this is just a change from two volatile ints to one.
|
|
In some places the functions were used directly. Use the macros
consistently everywhere, so that it becomes easier later on to capture
the fast path directly inside the macro and only incur the call
overhead on the slow path.
|
|
A variant of this new lock algorithm has been presented at SAC'16, see
https://hal.inria.fr/hal-01304108. A full version of that paper is
available at https://hal.inria.fr/hal-01236734.
The main motivation of this is to improve on the safety of the basic lock
implementation in musl. This is achieved by squeezing a lock flag and a
congestion count (= threads inside the critical section) into a single
int. As a result, an unlock operation performs exactly one atomic
memory transfer (a_fetch_add) and never touches the value again, yet
still detects whether a waiter has to be woken up.
This fixes a use-after-free bug in pthread_detach that had
temporarily been patched; therefore this patch also reverts
c1e27367a9b26b9baac0f37a12349fc36567c8b6.
This is also the only place where internal knowledge of the lock
algorithm is used.
The main price for the improved safety is slightly larger code. Under
high congestion, the scheduling behavior also differs from the
previous algorithm: a successful put-to-sleep may appear out of order
relative to arrival at the critical section.
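A sketch of the unlock side under this value layout, where INT_MIN is
the lock flag and the low bits count the threads inside the critical
section (an illustration, not the verbatim musl code):

    #include <limits.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    static void unlock_sketch(volatile int *l)
    {
        /* negative means locked; one atomic add clears the flag
         * and this thread's congestion count together, and its
         * return value alone says whether a waiter remains */
        if (*l < 0) {
            if (__sync_fetch_and_add(l, -(INT_MIN + 1)) != (INT_MIN + 1))
                syscall(SYS_futex, l, FUTEX_WAKE, 1);
        }
    }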
|
|
calling __unlock on t->exitlock is not valid because __unlock reads
the waiters count after making the atomic store that could allow
pthread_exit to continue and unmap the thread's stack and the object t
points to. for now, inline the __unlock logic with an unconditional
futex wake operation so that the waiters count is not needed.
once __lock/__unlock have been made safe for self-synchronized
destruction, we could switch back to using them.
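the interim fix, sketched:

    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/futex.h>

    static void exitlock_release_sketch(volatile int *exitlock)
    {
        /* store, then wake unconditionally: after the store the
         * object may already be unmapped, so it must never be
         * read again (only its address may be passed on) */
        __atomic_store_n(exitlock, 0, __ATOMIC_SEQ_CST);
        syscall(SYS_futex, exitlock, FUTEX_WAKE, 1);
    }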
|
|
if the parent thread was able to set the new thread's priority before
it reached the check for 'startlock', the new thread failed to restore
its signal mask and thus ran with all signals blocked.
concept for patch by Sergei, who reported the issue; unnecessary
changes were removed and comments added since the whole 'startlock'
thing is non-idiomatic and confusing. eventually it should be replaced
with use of idiomatic synchronization primitives.
|
|
passing to pthread_join the id of a thread which is not joinable
results in undefined behavior.
in principle the check to trap does not necessarily work if
pthread_detach was called after thread creation, since no effort is
made here to synchronize access to t->detached, but the check is
well-defined and harmless for callers which did not invoke UB, and
likely to help catch erroneous code that would otherwise mysteriously
hang.
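the check, sketched (a_crash is musl's fast-trap primitive, shown
here as a null store):

    static void a_crash(void)
    {
        *(volatile char *)0 = 0;
    }

    struct thread { int detached; };

    static void join_entry_check(struct thread *t)
    {
        if (t->detached) a_crash();
    }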
patch by William Pitcock.
|
|
The flag 1<<7 is used in several places for different purposes that are
not always easy to distinguish. Mark those usages that correspond to the
flag that is used by the kernel for futexes.
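for reference, the futex usage of the flag:

    /* bit 7 of the futex op word; the kernel uapi calls it
     * FUTEX_PRIVATE_FLAG */
    #define FUTEX_PRIVATE 128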
|
|
when using the sh4a opcodes, the assembler tags the resulting object
file as requiring sh4a. the linker then refuses to (static) link it
with object files marked as requiring j2, since there is no isa level
that includes both sh4a and j2 instructions.
|
|
binutils commit bada43421274615d0d5f629a61a60b7daa71bc15 tightened
immediate fixup handling in gas in such a way that the final .arch of
an object file must be compatible with the fixups used when the
instruction was assembled; this in turn broke assembling of atomics.s,
at least in thumb mode.
it's not clear whether this should be considered a bug in gas, but
.object_arch is preferable anyway for our purpose here of controlling
the ISA level tag on the object file being produced, and it's the
intended directive for use in object files with runtime code
selection. research by Szabolcs Nagy confirmed that .object_arch is
supported in all relevant versions of binutils and clang's integrated
assembler.
patch by Reiner Herrmann.
|
|
commit 78a8ef47c4d92b7680c52a85f80a81e29da86bb9 inadvertently removed
the SA_RESTART flag from the sigaction for the internal signal handler
used by __synccall for broadcasting. as a result, programs which did
not use interrupting signals but which used set*id() in a
multithreaded context could wrongly observe EINTR errors they're not
prepared to handle.
|
|
x32 has another gratuitous difference from all other archs:
it passes an array of 64bit values to __tls_get_addr().
usually it is an array of size_t.
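sketched, the difference:

    #include <stdint.h>
    #include <stddef.h>

    /* on other archs the entries are effectively: */
    typedef struct { size_t ti_module, ti_offset; } tls_mod_off_t;

    /* on x32, size_t is 32-bit but __tls_get_addr receives: */
    typedef struct { uint64_t ti_module, ti_offset; } tls_mod_off_x32_t;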
|
|
three problems are addressed:
- use of pc arithmetic, which was difficult if not impossible to make
correct in thumb mode on all models, so that relative rather than
absolute pointers to the backends could be used. this was designed
back when there was no coherent model for the early stages of the
dynamic linker before relocations, and is no longer necessary.
- assumption that data (the relative pointers to the backends) can be
accessed at a constant displacement from the code. this will not be
possible on future fdpic subarchs (for cortex-m), so move
responsibility for loading the backend code address to the caller.
- hard-coded arm opcodes using the .word directive. instead, use the
.arch directive to work around the assembler's refusal to assemble
instructions not available (or in some cases, available but just
considered deprecated) in the target isa level. the obscure v6t2
arch is used for v6 code so as to (1) allow generation of thumb2
output if -mthumb is active, and (2) avoid warnings/errors for mcr
barriers that clang would produce if we just set arch to v7-a.
in addition, the __aeabi_read_tp function is moved out of the inner
workings and implemented as an asm wrapper around a C function, so
that asm code does not need to read global data. the asm wrapper
serves to satisfy the ABI calling convention requirements for this
function.
|
|
|
|
based on patch by Timo Teräs:
While generally this is a bad API, it is the only existing API to
affect c++ (std::thread) and c11 (thrd_create) thread stack size.
This patch allows applications to increase, but not decrease, the
stack and guard page sizes.
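the effective rule at thread creation, sketched (illustrative names):

    #include <stddef.h>

    extern size_t __default_stacksize; /* stand-in for the internal default */

    static size_t effective_stacksize(size_t requested)
    {
        /* requests below the current default are raised to it */
        return requested > __default_stacksize
            ? requested : __default_stacksize;
    }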
|
|
commit 33ce920857405d4f4b342c85b74588a15e2702e5 broke pthread_create
in the case where a null attribute pointer is passed; rather than
using the default sizes, sizes of 0 (plus the remainder of one page
after TLS/TCB use) were used.
|
|
previously, the pthread_attr_t object was always initialized all-zero,
and stack/guard size were represented as differences versus their
defaults. this required lots of confusing offset arithmetic everywhere
they were used. instead, have pthread_attr_init fill in the default
values, and work with absolute sizes everywhere.
|
|
the thread name is displayed by gdb's "info threads".
|
|
|
|
Linux's documentation (robust-futex-ABI.txt) claims that, when a
process dies with a futex on the robust list, bit 30 (0x40000000) is
set to indicate the status. however, what actually happens is that
bits 0-30 are replaced with the value 0x40000000, i.e. bits 0-29
(containing the old owner tid) are cleared at the same time bit 30 is
set.
our userspace-side code for robust mutexes was written based on that
documentation, assuming that the kernel would never produce a futex value
of 0x40000000, since the low (owner) bits would always be non-zero.
commit d338b506e39b1e2c68366b12be90704c635602ce introduced this
assumption explicitly while fixing another bug in how non-recoverable
status for robust mutexes was tracked. presumably the tests conducted
at that time only checked non-process-shared robust mutexes, which are
handled in pthread_exit (which implemented the documented kernel
protocol, not the actual one) rather than by the kernel.
change pthread_exit robust list processing to match the kernel
behavior, clearing bits 0-29 while setting bit 30, and use the value
0x7fffffff instead of 0x40000000 to encode non-recoverable status. the
choice of value here is arbitrary; any value with at least one of bits
0-29 set should work just as well.
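the two values, sketched:

    /* owner-died value as the kernel actually writes it: bits 0-29
     * cleared, bit 30 set, waiters bit (31) preserved */
    static int owner_died_value(int old)
    {
        return (int)(0x40000000u | ((unsigned)old & 0x80000000u));
    }

    /* non-recoverable marker (illustrative name): bits 0-29 are
     * nonzero, so it can't collide with the kernel's value */
    #define MUT_NOTRECOVERABLE 0x7fffffff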
|
|
|
|
per the powerpc psabi, offset 4 of the stack at call time belongs to
the callee and is used for spilling lr (return address). in addition,
offset 0 on the stack must contain a pointer to the previous stack
frame, or a null pointer for the initial stack frame of a thread.
__clone failed to set up any stack frame on the new thread's stack,
thereby allowing the start function it called to clobber offset 4 of
the new thread's struct __pthread, which contains the dtv pointer.
add code to set up a proper stack frame and align the stack pointer to
a multiple of 16 (also an abi requirement) if it was not already
aligned.
|
|
based on patch submitted by Jaydeep Patil, with minor changes.
|
|
patch by Mahesh Bodapati and Jaydeep Patil of Imagination
Technologies.
|
|
the workaround was for a bug that botched .gpword references to local
labels, applying a nonsensical random offset of -0x4000 to them.
this reverses commit 5e396fb996a80b035d0f6ecf7fed50f68aa3ebb7 and
removes a similar hack that was added to syscall_cp.s in the later
commit 756c8af8589265e99e454fe3adcda1d0bc5e1963. it turns out one
additional instance of the same idiom, the GETFUNCSYM macro in
arch/mips/reloc.h, was still affected by the assembler bug and does
not admit an easy workaround without making assumptions about how the
macro is used. the previous workarounds made static linking work but
left the early-stage dynamic linker broken and thus had limited
usefulness.
instead, affected users (using binutils versions older than 2.20) will
need to fix the bug on the binutils side; the trivial patch is commit
453f5985b13e35161984bf1bf657bbab11515aa4 in the binutils-gdb
repository.
|
|
the old __cp_cancel code path loaded the address of __cancel from the
GOT using the $gp register, which happened to be set to point to the
correct GOT by the calling C function, but there is no ABI requirement
that this happen. instead, go the roundabout way and compute the
address of __cancel via pc-relative and gp-relative addressing
starting with a fake return address generated by a bal instruction,
which is the same trick crt1 uses to bootstrap.
|
|
not only is pthread_kill expensive in this case; it also breaks
testing under qemu app-level emulation.
|
|
this file's .data section was not aligned, and just happened to get
the correct alignment with past builds. it's likely that the move of
atomic.s from arch/arm/src to src/thread/arm caused the change in
alignment, which broke the atomic and thread-pointer access fragments
on actual armv5 hardware.
|