|
these are not a public interface and are not intended to be callable
from anywhere but the public clone function or other places in libc.
|
|
|
|
this function is glue for linking dependency logic.
|
|
|
|
it's already included in all places where these are needed, and aside
from __tls_get_addr, they're all implementation internals.
|
|
|
|
|
|
eliminate the gratuitous glue function for reporting the version; it
was probably left over from the old dynamic linker design, which lacked
a clear barrier for when/how it could access global data. put the
declaration for the data object that replaces it in libc.h, where it
can be type-checked.
|
|
logically these belong to the intersection of the stdio and pthread
subsystems, and either place the declarations could go (stdio_impl.h
or pthread_impl.h) requires a forward declaration for one of the
argument types.
|
|
|
|
syscall.h was chosen as the header to declare it, since its intended
usage is alongside syscalls as a fallback for operations the direct
syscall does not support.
|
|
|
|
this cleans up what had become widespread inline use of "GNU C" style
attributes directly in the source, and lowers the barrier to increased
use of hidden visibility, which will be useful for recovering
some of the efficiency lost when the protected visibility hack was
dropped in commit dc2f368e565c37728b0d620380b849c3a1ddd78f, especially
on archs where the PLT ABI is costly.
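for illustration, a sketch of the kind of wrapper this enables (the
macro name and the declaration shown are assumptions, not necessarily
the exact ones committed):

    /* one central definition replaces scattered inline attributes */
    #define hidden __attribute__((__visibility__("hidden")))

    /* internal declarations can then be written as: */
    hidden void __some_internal_helper(void);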
|
|
|
|
three ABIs are supported: the default with 68881 80-bit fpu format and
results returned in floating point registers, softfloat-only with the
same format, and coldfire fpu with IEEE single/double only. only the
first is tested at all, and only under qemu, which has fpu emulation
bugs.
basic functionality smoke tests have been performed for the most
common arch-specific breakage via libc-test and qemu user-level
emulation. some sysvipc failures remain, but are shared with other big
endian archs and will be fixed separately.
|
|
since x86 and m68k are the only archs with 80-bit long double and each
has mandatory endianness, select the variant via endianness.
differences are minor: apparently just byte order and representation
of infinities. the m68k format is not well-documented anywhere I could
find, so if other differences are found they may require additional
changes later.
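a sketch of how the endianness-based selection might look in an
internal float-representation header (union and member names are
assumptions):

    #include <stdint.h>
    #include <float.h>
    #include <endian.h>

    #if LDBL_MANT_DIG == 64 && __BYTE_ORDER == __LITTLE_ENDIAN
    /* x86 variant: 64-bit mantissa first, then 16-bit sign/exponent */
    union ldshape { long double f; struct { uint64_t m; uint16_t se; } i; };
    #elif LDBL_MANT_DIG == 64 && __BYTE_ORDER == __BIG_ENDIAN
    /* m68k variant: sign/exponent first, 16 bits of padding, then mantissa */
    union ldshape { long double f; struct { uint16_t se; uint16_t pad; uint64_t m; } i; };
    #endif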
|
|
the wrapper start function that performs scheduling operations is
unreachable if pthread_attr_setinheritsched is never called, so move
it there rather than leaving it in the pthread_create source file, saving some code
size for static-linked programs.
|
|
eliminate the awkward startlock mechanism and corresponding fields of
the pthread structure that were only used at startup.
instead of having pthread_create perform the scheduling operations and
having the new thread wait for them to be completed, start the new
thread with a wrapper start function that performs its own scheduling,
sending the result code back via a futex. this way the new thread can
use storage from the calling thread's stack rather than permanent
fields in the pthread structure.
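a rough sketch of the handoff (the struct layout, the -1 "pending"
sentinel, and internal helpers like __syscall, a_store, __wake and
__wait are assumptions, not the committed code):

    struct start_sched_args {
        void *(*start_fn)(void *);
        void *start_arg;
        int policy;
        struct sched_param param;
        volatile int futex;   /* on the creating thread's stack; -1 = pending */
    };

    static void *start_with_sched(void *p)
    {
        struct start_sched_args *a = p;
        /* apply the requested policy to ourselves, then report the result */
        int r = -__syscall(SYS_sched_setscheduler,
                           __pthread_self()->tid, a->policy, &a->param);
        a_store(&a->futex, r);
        __wake(&a->futex, 1, 1);
        if (r) return 0;      /* pthread_create reports the error */
        return a->start_fn(a->start_arg);
    }

    /* in pthread_create, after clone() succeeds:
     *     while (args.futex == -1) __wait(&args.futex, 0, -1, 1);
     *     if (args.futex) ...fail with that error code...
     */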
|
|
over time the pthread structure has accumulated a lot of cruft taking
up size. this commit removes unused fields and packs booleans and
other small data more efficiently. changes which would also require
changing code are not included at this time.
non-volatile booleans are packed as unsigned char bitfield members.
the canceldisable and cancelasync fields need volatile qualification
due to how they're accessed from the cancellation signal handler and
cancellable syscalls called from signal handlers. since volatile
bitfield semantics are not clearly defined, discrete char objects are
used instead.
the pid field is completely removed; it has been unused since commit
83dc6eb087633abcf5608ad651d3b525ca2ec35e.
the tid field's type is changed to int because its use is as a value
in futexes, which are defined as plain int. it has no conceptual
relationship to pid_t. also, its position is not ABI.
startlock is reduced to a length-1 array. the second element was
presumably intended as a waiter count, but it was never used and made
no sense, since there is at most one waiter.
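a sketch of the packing strategy (field names are illustrative, not an
exact copy of the structure):

    struct pthread {
        /* ... */
        int tid;                   /* plain int: used as a futex value */

        /* non-volatile booleans packed as bitfields in one byte */
        unsigned char tsd_used:1;
        unsigned char dlerror_flag:1;

        /* flags accessed from signal handlers stay discrete volatile
         * chars, since volatile bitfield semantics are unclear */
        volatile signed char canceldisable, cancelasync;

        volatile int startlock[1]; /* length 1: at most one waiter */
        /* ... */
    };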
|
|
previously, some accesses to the detached state (from pthread_join and
pthread_getattr_np) were unsynchronized; they were harmless in
programs with well-defined behavior, but ugly. other accesses (in
pthread_exit and pthread_detach) were synchronized by a poorly named
"exitlock", with an ad-hoc trylock operation on it open-coded in
pthread_detach, whose only purpose was establishing a protocol for which
thread is responsible for deallocation of detached-thread resources.
instead, use an atomic detach_state and unify it with the futex used
to wait for thread exit. this eliminates 2 members from the pthread
structure, gets rid of the hackish lock usage, and makes rigorous the
trap added in commit 80bf5952551c002cf12d96deb145629765272db0 for
catching attempts to join detached threads. it should also make
attempts to detach an already-detached thread reliably trap.
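a sketch of the protocol (state names, values and helper signatures
are assumptions):

    enum { DT_EXITED = 0, DT_EXITING, DT_JOINABLE, DT_DETACHED };

    /* pthread_detach: atomically claim responsibility for deallocation */
    int detach_sketch(pthread_t t)
    {
        if (a_cas(&t->detach_state, DT_JOINABLE, DT_DETACHED) != DT_JOINABLE)
            return pthread_join(t, 0);   /* already exiting: reap it here */
        return 0;
    }

    /* pthread_join: the same int doubles as the exit futex; it is
     * cleared to DT_EXITED and woken when the thread exits */
    int join_sketch(pthread_t t, void **res)
    {
        int state;
        while ((state = t->detach_state) != DT_EXITED) {
            if (state >= DT_DETACHED) a_crash(); /* joining a detached thread */
            __wait(&t->detach_state, 0, state, 1);
        }
        /* ... collect the result through res, reclaim the stack ... */
        return 0;
    }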
|
|
if the last thread exited via pthread_exit, the logic that marked it
dead did not account for the possibility of it targeting itself via
atexit handlers. for example, an atexit handler calling
pthread_kill(pthread_self(), SIGKILL) would return success
(previously, ESRCH) rather than causing termination via the signal.
move the release of killlock after the determination is made whether
the exiting thread is the last thread. in the case where it's not,
move the release all the way to the end of the function. this way we
can clear the tid rather than spending storage on a dedicated
dead-flag. clearing the tid is also preferable in that it hardens
against inadvertent use of the value after the thread has terminated
but before it is joined.
|
|
the tid field in the pthread structure is not volatile, and really
shouldn't be, so as not to limit the compiler's ability to reorder,
merge, or split loads in code paths that may be relevant to
performance (like controlling lock ownership).
however, use of objects which are not volatile or atomic with futex
wait is inherently broken, since the compiler is free to transform a
single load into multiple loads, thereby using different values for
the controlling expression of the loop and for the value passed to the
futex syscall, leading the syscall to block instead of returning.
reportedly glibc's pthread_join was actually affected by an equivalent
issue on s390.
add a separate, dedicated join_futex object for pthread_join to use.
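a sketch of the difference (member and helper names are assumptions):

    /* broken: tid is not volatile, so the compiler may split this single
     * logical read into two loads, one for the test and another passed to
     * the futex syscall, which can then block on a stale value */
    int val;
    while ((val = t->tid)) __timedwait(&t->tid, val, 0, 0, 0);

    /* fixed: a dedicated volatile object used only under futex rules */
    while ((val = t->join_futex)) __timedwait(&t->join_futex, val, 0, 0, 0);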
|
|
commit 618b18c78e33acfe54a4434e91aa57b8e171df89 removed the previous
detection and hardening since it was incorrect. commit
72141795d4edd17f88da192447395a48444afa10 already handled all that
remained for hardening the static-linked case. in the dynamic-linked
case, have the dynamic linker check whether malloc was replaced and
make that information available.
with these changes, the properties documented in commit
c9f415d7ea2dace5bf77f6518b6afc36bb7a5732 are restored: if calloc is
not provided, it will behave as malloc+memset, and any of the
memalign-family functions not provided will fail with ENOMEM.
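a sketch of how the memalign family can consume that information (the
flag name, the hidden macro, and the internal helper are assumptions):

    extern hidden int __malloc_replaced; /* set by ldso when malloc did not
                                            resolve to libc's definition */

    void *aligned_alloc(size_t align, size_t len)
    {
        /* incomplete allocator replacement: fail rather than mix heaps */
        if (__malloc_replaced) {
            errno = ENOMEM;
            return 0;
        }
        return internal_memalign(align, len); /* hypothetical internal path */
    }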
|
|
this change serves multiple purposes:
1. it ensures that static linking of memalign-family functions will
pull in the system malloc implementation, thereby causing link errors
if an attempt is made to link the system memalign functions with a
replacement malloc (incomplete allocator replacement).
2. it eliminates calls to free that are unpaired with allocations,
which are confusing when setting breakpoints or tracing execution.
as a bonus, making __bin_chunk external may discourage aggressive and
unnecessary inlining of it.
|
|
|
|
Update atomic.h to provide a_ctz_l in all cases (atomic_arch.h should
now only provide a_ctz_32 and/or a_ctz_64).
The generic version of a_ctz_32 now takes advantage of a_clz_32 if
available and the generic a_ctz_64 now makes use of a_ctz_32.
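A sketch of the resulting structure in atomic.h (not verbatim):

    #include <stdint.h>

    /* generic a_ctz_32 in terms of a_clz_32, when the arch provides one;
     * a table-based fallback covers archs that provide neither */
    #ifndef a_ctz_32
    #define a_ctz_32 a_ctz_32
    static inline int a_ctz_32(uint32_t x)
    {
        /* isolate the lowest set bit, then convert clz to ctz */
        return 31 - a_clz_32(x & -x);
    }
    #endif

    /* generic a_ctz_64 built on a_ctz_32 */
    #ifndef a_ctz_64
    #define a_ctz_64 a_ctz_64
    static inline int a_ctz_64(uint64_t x)
    {
        uint32_t y = x;
        if (!y) return 32 + a_ctz_32(x >> 32);
        return a_ctz_32(y);
    }
    #endif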
|
|
previously this macro used an odd if/else form instead of the more
idiomatic do/while(0), making it unsafe against omission of the trailing
semicolon. the omission would make the following statement conditional
instead of producing an error.
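a hypothetical illustration of the hazard (not the original macro):

    /* old style: the caller's ';' is relied on as the empty else body */
    #define CHECK_OLD(x) if (!ok(x)) fail(); else
    /* new style: a missing semicolon becomes a hard compile error */
    #define CHECK_NEW(x) do { if (!ok(x)) fail(); } while (0)

    CHECK_OLD(a)    /* missing ';' still compiles: do_work() becomes the   */
    do_work();      /* else body and silently runs only when ok(a) is true */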
|
|
in the original submission of the patch that became commit
7c709f2d4f9872d1b445f760b0e68da89e256b9e, and in subsequent reading of
it by others, it was not clear that the new member had to be inserted
before canary_at_end, or that inserting it at that location was safe.
add comments to document.
|
|
|
|
In all cases this is just a change from two volatile ints to one.
|
|
A variant of this new lock algorithm has been presented at SAC'16, see
https://hal.inria.fr/hal-01304108. A full version of that paper is
available at https://hal.inria.fr/hal-01236734.
The main motivation for this is to improve the safety of the basic
lock implementation in musl. This is achieved by squeezing a lock flag
and a congestion count (= number of threads inside the critical
section) into a single int. An unlock operation thereby does exactly
one memory transfer (a_fetch_add) and never touches the value again,
but still detects whether a waiter has to be woken up.
This fixes a use-after-free bug in pthread_detach that had temporarily
been patched; therefore this patch also reverts
c1e27367a9b26b9baac0f37a12349fc36567c8b6.
This is also the only place where internal knowledge of the lock
algorithm is used.
The main price for the improved safety is slightly larger code.
Under high congestion, the scheduling behavior will differ from the
previous algorithm: a successful put-to-sleep may appear out of order
relative to the arrival at the critical section.
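A sketch of the value encoding and of the unlock path (close in spirit
to the change, but names and details are not guaranteed verbatim): the
sign bit of the int is the lock flag, and the remaining bits count the
threads contending for or inside the critical section.

    #include <limits.h>

    void __unlock(volatile int *l)
    {
        /* negative means "locked by us": one a_fetch_add both clears the
         * flag and removes our contribution to the congestion count */
        if (l[0] < 0) {
            if (a_fetch_add(l, -(INT_MIN + 1)) != (INT_MIN + 1)) {
                /* other threads are still counted: wake one waiter */
                __wake(l, 1, 1);
            }
        }
    }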
|
|
counts leading zero bits of a 64bit int, undefined on zero input.
(has nothing to do with atomics, added to atomic.h so target specific
helper functions are together.)
there is a logarithmic generic implementation and another in terms of
a 32bit a_clz_32 on targets where that's available.
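a sketch of the two variants (not verbatim):

    #include <stdint.h>

    #ifndef a_clz_64
    #define a_clz_64 a_clz_64
    static inline int a_clz_64(uint64_t x)
    {
    #ifdef a_clz_32
        if (x >> 32) return a_clz_32(x >> 32);
        return 32 + a_clz_32((uint32_t)x);
    #else
        /* logarithmic fallback: repeatedly narrow the half that contains
         * the highest set bit (input must be nonzero) */
        uint32_t y;
        int r;
        if (x >> 32) y = x >> 32, r = 0; else y = x, r = 32;
        if (y >> 16) y >>= 16; else r |= 16;
        if (y >> 8)  y >>= 8;  else r |= 8;
        if (y >> 4)  y >>= 4;  else r |= 4;
        if (y >> 2)  y >>= 2;  else r |= 2;
        return r | !(y >> 1);
    #endif
    }
    #endif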
|
|
The flag 1<<7 is used in several places for different purposes that are
not always easy to distinguish. Mark those usages that correspond to the
flag that is used by the kernel for futexes.
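A sketch of the marking, assuming the flag is given its own name
(1<<7, i.e. 128, is the kernel's futex "private" flag bit):

    #define FUTEX_WAIT    0
    #define FUTEX_WAKE    1
    #define FUTEX_PRIVATE 128   /* 1<<7: futex-op flag, not a count or mask */

    /* a use that genuinely means the kernel flag is now easy to spot: */
    __syscall(SYS_futex, addr, FUTEX_WAIT | FUTEX_PRIVATE, val, 0);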
|
|
the old limit was one byte too short to support locale names of the
form xx_XX.UTF-8@modifier where modifier is more than 3 bytes, a form
which various real-world locale names take. the problem could be
avoided by omitting the useless ".UTF-8" part, but users may need to
have it present when operating on mixed-libc systems or when it will
be carried over (e.g. across ssh) to other systems.
the new limit is chosen sufficient for existing/reasonable locale
names while still keeping the size of setlocale's static buffer small.
also add locale_impl.h to the Makefile's list of headers which force
rebuild of source files, to prevent dangerously inconsistent object
files from getting used after this change.
|
|
x32 has another gratuitous difference from all other archs:
it passes an array of 64-bit values to __tls_get_addr().
usually it is an array of size_t.
|
|
ISO C and POSIX only specify behavior for base arguments of 0 and
2-36; POSIX mandates an EINVAL error for unsupported bases. it's not
clear that there's a requirement for implementations not to "support"
additional bases as an extension, but "base 1" did not work in any
meaningful way anyway, so it should be considered unsupported and thus
an error.
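a sketch of the resulting check near the top of the shared integer
scanning code (placement and surrounding code are assumptions):

    if (base > 36 || base == 1) {
        errno = EINVAL;   /* POSIX-mandated error for unsupported bases */
        return 0;
    }
    /* ... normal scanning for base 0 and 2-36 ... */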
|
|
|
|
|
|
On s390x, the kernel provides AT_SYSINFO_EHDR but sets it to zero if
the program being run does not have a program interpreter. This causes
problems when running the dynamic linker directly.
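A sketch of the defensive check in the vdso lookup (variable and
function names are illustrative):

    #include <elf.h>
    #include <stddef.h>

    static void *find_vdso(size_t *auxv)
    {
        size_t base = 0;
        for (; auxv[0]; auxv += 2)
            if (auxv[0] == AT_SYSINFO_EHDR) base = auxv[1];
        /* a zero value must be treated the same as a missing entry */
        if (!base) return 0;
        return (void *)base;
    }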
|
|
alpha and s390x gratuitously use 64-bit entries (doubling the space and
hurting cache utilization) despite the values always being 32-bit.
based on patch by Bobby Bingham, with changes suggested by Alexander
Monakov to use the public Elf_Symndx type from link.h (and make it
properly variable by arch) rather than adding new internal
infrastructure for handling the type.
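a sketch of how the type is used (the lookup shown is simplified, not
the actual implementation):

    #include <link.h>   /* Elf_Symndx: uint32_t on most archs,
                           uint64_t on alpha and s390x */

    /* SysV hash layout: nbucket, nchain, bucket[nbucket], chain[nchain] */
    static Elf_Symndx first_in_chain(const Elf_Symndx *hashtab, uint32_t h)
    {
        Elf_Symndx nbucket = hashtab[0];
        const Elf_Symndx *buckets = hashtab + 2;
        return buckets[h % nbucket];
    }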
|
|
commit 31fb174dd295e50f7c5cf18d31fcfd5fe5a063b7 used
DEFAULT_GUARD_SIZE from pthread_impl.h in a static initializer,
breaking the build on archs where its definition, PAGE_SIZE, is not a
constant. instead, just define DEFAULT_GUARD_SIZE as 4096, the minimal
page size on any arch we support. pthread_create rounds up to whole
pages anyway, so defining it to 1 would also work, but a moderately
meaningful value is nicer to programs that use
pthread_attr_getguardsize on default-initialized attribute objects.
|
|
commit 6ffdc4579ffb34f4aab69ab4c081badabc7c0a9a set lnz in the code
path for non-zero digits after a huge string of zeros, but the
assignment of dc to lnz truncates if the value of dc does not fit in
int; this is possible for some pathologically long inputs, either via
strings on 64-bit systems or via scanf-family functions.
instead, simply set lnz to match the point at which we add the
artificial trailing 1 bit to simulate nonzero digits after a huge
run of zeros.
|
|
the mid-sized integer optimization relies on lnz being set up properly
to mark the last non-zero decimal digit, but this was not done if the
non-zero digit lay outside the KMAX digits of the base 10^9 number
representation.
so if the fractional part was a very long run of zeros (>2048*9 on
x86) followed by non-zero digits, then the integer optimization could
kick in, discarding the tiny non-zero fraction, which could mean a
wrong result in non-nearest rounding modes.
strtof, strtod and strtold were all affected.
|
|
in certain cases excessive trailing zeros could cause incorrect
rounding from long double to double or float in decfloat.
e.g. in strtof("9444733528689243848704.000000", 0) the argument
is 0x1.000001p+73, exactly halfway between two representable floats;
this incorrectly got rounded to 0x1.000002p+73 instead of 0x1p+73,
but with fewer trailing zeros the rounding was fine.
the fix makes sure that the z index always points one past the last
non-zero digit in the base 10^9 representation; this way trailing
zeros don't affect the rounding logic.
|
|
despite sh not generally using register-pair alignment for 64-bit
syscall arguments, there are arch-specific versions of the syscall
entry points for pread and pwrite which include a dummy argument for
alignment before the 64-bit offset argument.
|
|
|
|
based on patch submitted by Jaydeep Patil, with minor changes.
|
|
patch by Mahesh Bodapati and Jaydeep Patil of Imagination
Technologies.
|
|
this change is made in preparation for adding the mips64 port, which
needs a 64-bit (and mips64-specific) form of the R_INFO macro, but
it's a better abstraction anyway.
based on part of the mips64 port patch by Mahesh Bodapati and Jaydeep
Patil of Imagination Technologies.
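a sketch of the kind of override point described (macro placement is
an assumption; the default falls back to the standard ELF encoding,
and an arch like mips64 can define its own in its arch header because
its r_info layout differs):

    #include <elf.h>
    #include <stdint.h>

    #ifndef R_INFO
    #if UINTPTR_MAX > 0xffffffff
    #define R_INFO(sym, type) ELF64_R_INFO(sym, type) /* ((sym)<<32) + type */
    #else
    #define R_INFO(sym, type) ELF32_R_INFO(sym, type) /* ((sym)<<8) + type  */
    #endif
    #endif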
|
|
No current ports do this, but it will be useful for porting to 64-bit ll/sc
architectures, such as mips64 and powerpc64.
|