Age | Commit message | Author | Files | Lines |
|
binutils commit bada43421274615d0d5f629a61a60b7daa71bc15 tightened
immediate fixup handling in gas in such a way that the final .arch of
an object file must be compatible with the fixups used when the
instruction was assembled; this in turn broke assembling of atomics.s,
at least in thumb mode.
it's not clear whether this should be considered a bug in gas, but
.object_arch is preferable anyway for our purpose here of controlling
the ISA level tag on the object file being produced, and it's the
intended directive for use in object files with runtime code
selection. research by Szabolcs Nagy confirmed that .object_arch is
supported in all relevant versions of binutils and clang's integrated
assembler.
patch by Reiner Herrmann.
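as an illustration, a minimal sketch of the directive pair (GNU C with
top-level asm; the arch arguments here are only an assumed example):

    __asm__(
        ".arch armv7-a\n"        /* ISA level the assembler accepts */
        ".object_arch armv4t\n"  /* arch attribute recorded in the object */
    );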
|
|
commit 78a8ef47c4d92b7680c52a85f80a81e29da86bb9 inadvertently removed
the SA_RESTART flag from the sigaction for the internal signal handler
used by __synccall for broadcasting. as a result, programs which did
not use interrupting signals but which used set*id() in a
multithreaded context could wrongly observe EINTR errors they're not
prepared to handle.
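for reference, a minimal sketch of installing a handler with SA_RESTART
(the handler and setup function are placeholders, not musl's internals):

    #include <signal.h>

    static void handler(int sig)
    {
        /* broadcast work would go here */
    }

    static void install(int sig)
    {
        struct sigaction sa = {
            .sa_handler = handler,
            .sa_flags = SA_RESTART, /* restart interrupted syscalls */
        };
        sigfillset(&sa.sa_mask);
        sigaction(sig, &sa, 0);
    }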
|
|
x32 has another gratuitous difference from all other archs:
it passes an array of 64-bit values to __tls_get_addr().
usually it is an array of size_t.
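a hedged sketch of how the difference can be absorbed in one typedef
(tls_mod_off_t is an assumed name, and __ILP32__ stands in for an x32
check):

    #include <stddef.h>
    #include <stdint.h>

    #ifdef __ILP32__
    typedef uint64_t tls_mod_off_t; /* x32: 64-bit slots, 32-bit size_t */
    #else
    typedef size_t tls_mod_off_t;
    #endif

    /* v[0] is the module id, v[1] the offset within the module's TLS */
    void *__tls_get_addr(tls_mod_off_t *v);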
|
|
three problems are addressed:
- use of pc arithmetic, which was difficult if not impossible to make
correct in thumb mode on all models, so that relative rather than
absolute pointers to the backends could be used. this was designed
back when there was no coherent model for the early stages of the
dynamic linker before relocations, and is no longer necessary.
- assumption that data (the relative pointers to the backends) can be
accessed at a constant displacement from the code. this will not be
possible on future fdpic subarchs (for cortex-m), so move
responsibility for loading the backend code address to the caller.
- hard-coded arm opcodes using the .word directive. instead, use the
.arch directive to work around the assembler's refusal to assemble
instructions not available (or in some cases, available but just
considered deprecated) in the target ISA level. the obscure v6t2
arch is used for v6 code so as to (1) allow generation of thumb2
output if -mthumb is active, and (2) avoid warnings/errors for mcr
barriers that clang would produce if we just set arch to v7-a.
in addition, the __aeabi_read_tp function is moved out of the inner
workings and implemented as an asm wrapper around a C function, so
that asm code does not need to read global data. the asm wrapper
serves to satisfy the ABI calling convention requirements for this
function.
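a hedged sketch of the wrapper shape (helper name assumed): r1-r3 are
preserved because the special __aeabi_read_tp convention requires it,
and all data access happens in the C helper:

    __asm__(
        ".global __aeabi_read_tp\n"
        ".type __aeabi_read_tp,%function\n"
        "__aeabi_read_tp:\n"
        "    push {r1,r2,r3,lr}\n"
        "    bl __aeabi_read_tp_c\n"
        "    pop {r1,r2,r3,pc}\n"
    );

    void *__aeabi_read_tp_c(void)
    {
        /* __pthread_self is musl-internal: ordinary C access to the
         * thread pointer, free to read global data */
        return __pthread_self();
    }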
|
|
|
|
based on patch by Timo Teräs:
While generally this is a bad API, it is the only existing API to
affect c++ (std::thread) and c11 (thrd_create) thread stack size.
This patch allows applications only to increase stack and guard
page sizes.
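a hedged usage sketch, assuming the interface in question is
pthread_setattr_default_np (per the glibc extension of the same name,
which the commit text does not name explicitly):

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <pthread.h>

    int raise_default_stack(size_t size)
    {
        pthread_attr_t a;
        int r;
        pthread_attr_init(&a);
        pthread_attr_setstacksize(&a, size); /* only increases are honored */
        r = pthread_setattr_default_np(&a);
        pthread_attr_destroy(&a);
        return r;
    }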
|
|
commit 33ce920857405d4f4b342c85b74588a15e2702e5 broke pthread_create
in the case where a null attribute pointer is passed; rather than
using the default sizes, sizes of 0 (plus the remainder of one page
after TLS/TCB use) were used.
|
|
previously, the pthread_attr_t object was always initialized all-zero,
and stack/guard size were represented as differences versus their
defaults. this required lots of confusing offset arithmetic everywhere
they were used. instead, have pthread_attr_init fill in the default
values, and work with absolute sizes everywhere.
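a hedged sketch of the new scheme (macro and field names assumed):

    int pthread_attr_init(pthread_attr_t *a)
    {
        *a = (pthread_attr_t){0};
        a->_a_stacksize = DEFAULT_STACK_SIZE; /* absolute size, not a delta */
        a->_a_guardsize = DEFAULT_GUARD_SIZE;
        return 0;
    }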
|
|
the thread name is displayed by gdb's "info threads".
|
|
|
|
Linux's documentation (robust-futex-ABI.txt) claims that, when a
process dies with a futex on the robust list, bit 30 (0x40000000) is
set to indicate the status. however, what actually happens is that
bits 0-30 are replaced with the value 0x40000000, i.e. bits 0-29
(containing the old owner tid) are cleared at the same time bit 30 is
set.
our userspace-side code for robust mutexes was written based on that
documentation, assuming that the kernel would never produce a futex value
of 0x40000000, since the low (owner) bits would always be non-zero.
commit d338b506e39b1e2c68366b12be90704c635602ce introduced this
assumption explicitly while fixing another bug in how non-recoverable
status for robust mutexes was tracked. presumably the tests conducted
at that time only checked non-process-shared robust mutexes, which are
handled in pthread_exit (which implemented the documented kernel
protocol, not the actual one) rather than by the kernel.
change pthread_exit robust list processing to match the kernel
behavior, clearing bits 0-29 while setting bit 30, and use the value
0x7fffffff instead of 0x40000000 to encode non-recoverable status. the
choice of value here is arbitrary; any value with at least one of bits
0-29 set should work just as well.
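the difference, as a sketch over the futex word (bit 31 is
FUTEX_WAITERS, which the kernel preserves):

    #define FUTEX_OWNER_DIED 0x40000000
    #define FUTEX_WAITERS    0x80000000

    /* what robust-futex-ABI.txt describes: owner tid bits survive */
    unsigned documented(unsigned v) { return v | FUTEX_OWNER_DIED; }

    /* what the kernel actually does: bits 0-30 become 0x40000000 */
    unsigned actual(unsigned v) { return (v & FUTEX_WAITERS) | FUTEX_OWNER_DIED; }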
|
|
|
|
per the powerpc psabi, offset 4 of the stack at call time belongs to
the callee and is used for spilling lr (return address). in addition,
offset 0 on the stack must contain a pointer to the previous stack
frame, or a null pointer for the initial stack frame of a thread.
__clone failed to set up any stack frame on the new thread's stack,
thereby allowing the start function it called to clobber offset 4 of
the new thread's struct __pthread, which contains the dtv pointer.
add code to set up a proper stack frame and align the stack pointer to
a multiple of 16 (also an abi requirement) if it was not already
aligned.
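a sketch of the frame layout being established (types illustrative):

    #include <stdint.h>

    struct initial_frame {
        struct initial_frame *back_chain; /* offset 0: null in the first frame */
        uint32_t lr_save;                 /* offset 4: callee's lr spill slot */
    };
    /* the child's stack pointer is also rounded down to a multiple of 16 */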
|
|
based on patch submitted by Jaydeep Patil, with minor changes.
|
|
patch by Mahesh Bodapati and Jaydeep Patil of Imagination
Technologies.
|
|
the workaround was for a bug that botched .gpword references to local
labels, applying a nonsensical random offset of -0x4000 to them.
this reverses commit 5e396fb996a80b035d0f6ecf7fed50f68aa3ebb7 and
removes a similar hack that was added to syscall_cp.s in the later
commit 756c8af8589265e99e454fe3adcda1d0bc5e1963. it turns out one
additional instance of the same idiom, the GETFUNCSYM macro in
arch/mips/reloc.h, was still affected by the assembler bug and does
not admit an easy workaround without making assumptions about how the
macro is used. the previous workarounds made static linking work but
left the early-stage dynamic linker broken and thus had limited
usefulness.
instead, affected users (using binutils versions older than 2.20) will
need to fix the bug on the binutils side; the trivial patch is commit
453f5985b13e35161984bf1bf657bbab11515aa4 in the binutils-gdb
repository.
|
|
the old __cp_cancel code path loaded the address of __cancel from the
GOT using the $gp register, which happened to be set to point to the
correct GOT by the calling C function, but there is no ABI requirement
that this happen. instead, go the roundabout way and compute the
address of __cancel via pc-relative and gp-relative addressing
starting with a fake return address generated by a bal instruction,
which is the same trick crt1 uses to bootstrap.
|
|
not only is pthread_kill expensive in this case; it also breaks
testing under qemu app-level emulation.
|
|
this file's .data section was not aligned, and just happened to get
the correct alignment with past builds. it's likely that the move of
atomic.s from arch/arm/src to src/thread/arm caused the change in
alignment, which broke the atomic and thread-pointer access fragments
on actual armv5 hardware.
|
|
|
|
all such arch-specific translation units are being moved to
appropriate arch dirs under the main src tree.
|
|
this is possible with the new build system that allows src/*/$(ARCH)/*
files which do not shadow a file in the parent directory, and yields a
more logical organization. eventually it will be possible to remove
arch/*/src from the build system.
|
|
sh needs runtime-selected atomic backends since there are a number of
supported models that use non-forwards-compatible (non-smp-compatible)
atomic mechanisms. previously, the code paths for this were highly
inefficient since they involved C function calls with multiple
branches in the callee and heavy spills in the caller. the new code
calls the runtime-selected asm fragment from inline asm with
extremely minimal clobbers, rather than using a function call.
for the sh4a case where the atomic mechanism is known and there is no
forward-compatibility issue, the movli.l and movco.l instructions are
provided as a_ll and a_sc, allowing the new shared atomic.h to
generate efficient inline versions of all the basic atomic operations
without needing a cas loop.
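for example, a read-modify-write operation can then be built directly
on the ll/sc pair, with no cas loop (a hedged sketch of the shared
atomic.h idea, using the a_ll/a_sc primitives named above):

    static inline int a_fetch_add(volatile int *p, int v)
    {
        int old;
        do old = a_ll(p);          /* movli.l: load-linked */
        while (!a_sc(p, old + v)); /* movco.l: store-conditional; retry */
        return old;
    }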
|
|
this was only a tiny optimization, and static-linked binaries should
not be calling __tls_get_addr anyway since the linker is supposed to
perform relaxation, resulting in use of the local-exec TLS model.
|
|
this is the first and simplest stage of removal of the SHARED macro,
which will eventually allow libc.a and libc.so to be produced from the
same object files.
the original motivation for these #ifdefs which are now being removed
was to allow building a static-only libc using a compiler that does
not support visibility. however, SHARED was the wrong condition to
test for this anyway; various assembly-language sources refer to
hidden symbols and declare them with the .hidden directive, making it
wrong to define the referenced symbols as non-hidden. if there is a
need in the future to build libc using compilers that lack visibility,
support could be moved to the build system or perhaps the __PIC__
macro could be checked instead of SHARED.
|
|
these files are all accepted as legacy arm syntax when producing arm
code, but legacy syntax cannot be used for producing thumb2 with
access to the full ISA. even after switching to UAL, some asm source
files contain instructions which are not valid in thumb mode, so these
will need to be addressed separately.
|
|
the idea of the three-instruction sequence being removed was to be
able to return to thumb code when used on armv4t+ from a thumb caller,
but also to be able to run on armv4 without the bx instruction
available (in which case the low bit of lr would always be 0).
however, without compiler support for generating such a sequence from
C code, which does not exist and which there is unlikely to be
interest in implementing, there is little point in having it in the
asm, and it would likely be easier to add pre-armv4t support via
enhanced linker handling of R_ARM_V4BX than at the compiler level.
removing this code simplifies adding support for building libc in
thumb2-only form (for cortex-m).
|
|
previously, only archs that needed to do stack cleanup defined a
__cp_cancel label for acting on cancellation in their syscall asm, and
a default definition was provided by a weak alias to __cancel, the C
function. this resulted in wrong codegen for arm on gcc versions
affected by pr 68178 and possibly similar issues (like pr 66609) on
other archs, and also created an inconsistency where the __cp_begin
and __cp_end labels were treated as const data but __cp_cancel was
treated as a function. this in turn caused incorrect code generation
on archs where function pointers point to function descriptors rather
than code (for now, only sh/fdpic).
|
|
using the actual mcontext_t definition rather than an overlaid pointer
array both improves correctness/readability and eliminates some ugly
hacks for archs with 64-bit registers but 32-bit program counter.
also fix UB due to comparison of pointers not in a common array
object.
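a hedged sketch of the UB-free comparison, converting to uintptr_t
rather than comparing pointers into distinct objects (label names from
the cancellation machinery discussed elsewhere in this log):

    #include <stdint.h>

    extern const char __cp_begin[], __cp_end[];

    static int pc_in_cancellable_range(uintptr_t pc)
    {
        return pc >= (uintptr_t)__cp_begin && pc < (uintptr_t)__cp_end;
    }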
|
|
POSIX requires pthread_join to synchronize memory on success. The
futex wait inside __timedwait_cp cannot handle this because it's not
called in all cases. Also, in the case of a spurious wake, tid can
become zero between the wake and when the joining thread checks it.
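a hedged sketch of the resulting shape (internal names assumed; the
explicit barrier placement is illustrative):

    int __pthread_join(pthread_t t, void **res)
    {
        int tmp;
        /* loop: after a spurious wake, tid may still be nonzero */
        while ((tmp = t->tid)) __timedwait_cp(&t->tid, tmp, 0, 0, 0);
        a_barrier(); /* synchronize memory with the exited thread */
        if (res) *res = t->result;
        return 0;
    }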
|
|
clone calls back to a function pointer provided by the caller, which
will actually be a pointer to a function descriptor on fdpic. the
obvious solution is to have a separate version of clone for fdpic, but
I have taken a simpler approach to work around the problem. instead of
calling the pointed-to function from asm, a direct call is made to an
internal C function which then calls the pointed-to function. this
lets the C compiler generate the appropriate calling convention for an
indirect call with no need for ABI-specific assembly.
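a hedged sketch of such a helper (name hypothetical): the asm makes a
direct call to it, and the compiler emits the descriptor-based indirect
call to fn (__syscall/SYS_exit are internal primitives):

    static void start_thread(int (*fn)(void *), void *arg)
    {
        /* exit with the start function's return value */
        __syscall(SYS_exit, fn(arg));
    }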
|
|
the TLS ABI spec for mips, powerpc, and some other (presently
unsupported) RISC archs has the return value of __tls_get_addr offset
by +0x8000 and the result of DTPOFF relocations offset by -0x8000. I
had previously assumed this part of the ABI was actually just an
implementation detail, since the adjustments cancel out. however, when
the local dynamic model is used for accessing TLS that's known to be
in the same DSO, either of the following may happen:
1. the -0x8000 offset may already be applied to the argument structure
passed to __tls_get_addr at ld time, without any opportunity for
runtime relocations.
2. __tls_get_addr may be used with a zero offset argument to obtain a
base address for the module's TLS, to which the caller then applies
immediate offsets for individual objects accessed using the local
dynamic model. since the immediate offsets have the -0x8000 adjustment
applied to them, the base address they use needs to include the
+0x8000 offset.
it would be possible, but more complex, to store the pointers in the
dtv[] array with the +0x8000 offset pre-applied, to avoid the runtime
cost of adding 0x8000 on each call to __tls_get_addr. this change
could be made later if measurements show that it would help.
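a hedged sketch of the adjusted entry point (internal names assumed):

    #define DTP_OFFSET 0x8000

    void *__tls_get_addr(size_t *v)
    {
        struct pthread *self = __pthread_self();
        /* v[0]: module id; v[1]: offset, already carrying the -0x8000
         * bias applied at link time */
        return (char *)self->dtv[v[0]] + v[1] + DTP_OFFSET;
    }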
|
|
linux kernel commit 46e12c07b3b9603c60fc1d421ff18618241cb081 caused
the mips syscall mechanism to fail with EFAULT when the userspace
stack pointer is invalid, breaking __unmapself used for detached
thread exit. the workaround is to set $sp to a known-valid, readable
address, and the simplest one to obtain is the address of the current
function, which is available (per o32 calling convention) in $25.
|
|
this error simply indicated a system without memory protection (NOMMU)
and should not cause failure in the caller.
|
|
nominally the low bits of the trap number on sh are the number of
syscall arguments, but they have never been used by the kernel, and
some code making syscalls does not even know the number of arguments
and needs to pass an arbitrarily high number anyway.
sh3/sh4 traditionally used the trap range 16-31 for syscalls, but part
of this range overlapped with hardware exceptions/interrupts on sh2
hardware, so an incompatible range 32-47 was chosen for sh2.
using trap number 31 everywhere, since it's in the existing sh3/sh4
range and does not conflict with sh2 hardware, is a proposed
unification of the kernel syscall convention that will allow binaries
to be shared between sh2 and sh3/sh4. if this is not accepted into the
kernel, we can refit the sh2 target with runtime selection mechanisms
for the trap number, but doing so would be invasive and would entail
non-trivial overhead.
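a hedged sketch of a one-argument syscall under the proposed
convention (register assignments per the sh linux ABI; sh2-specific
return quirks omitted):

    static long syscall1(long n, long a)
    {
        register long r3 __asm__("r3") = n; /* syscall number */
        register long r4 __asm__("r4") = a; /* first argument */
        register long r0 __asm__("r0");     /* return value */
        __asm__ __volatile__ ("trapa #31"
            : "=r"(r0) : "r"(r3), "r"(r4) : "memory");
        return r0;
    }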
|
|
due to the way the interrupt and syscall trap mechanism works,
userspace on sh2 must never set the stack pointer to an invalid value.
thus, the approach used on most archs, where __unmapself executes with
no stack for the interval between SYS_munmap and SYS_exit, is not
viable on sh2.
in order not to pessimize sh3/sh4, the sh asm version of __unmapself
is not removed. instead it's renamed and redirected through code that
calls either the generic (safe) __unmapself or the sh3/sh4 asm,
depending on compile-time and run-time conditions.
|
|
the sh2 target is being considered an ISA subset of sh3/sh4, in the
sense that binaries built for sh2 are intended to be usable on later
cpu models/kernels with mmu support. so rather than hard-coding
sh2-specific atomics, the runtime atomic selection mechanism that was
already in place has been extended to add sh2 atomics.
at this time, the sh2 atomics are not SMP-compatible; since the ISA
lacks actual atomic operations, the new code instead masks interrupts
for the duration of the atomic operation, producing an atomic result
on single-core. this is only possible because the kernel/hardware does
not impose protections against userspace doing so. additional changes
will be needed to support future SMP systems.
care has been taken to avoid producing significant additional code
size in the case where it's known at compile-time that the target is
not sh2 and does not need sh2-specific code.
|
|
functions which open in-memory FILE stream variants all shared a tail
with __fdopen, adding the FILE structure to stdio's open file list.
replacing this common tail with a function call reduces code size and
duplication of logic. the list is also partially encapsulated now.
function signatures were chosen to facilitate tail call optimization
and reduce the need for additional accessor functions.
with these changes, static linked programs that do not use stdio no
longer have an open file list at all.
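a hedged sketch of the shared tail (function names assumed); a creator
such as __fdopen can then finish with return __ofl_add(f);, a tail
call:

    FILE *__ofl_add(FILE *f)
    {
        FILE **head = __ofl_lock(); /* lock the list, get its head */
        f->next = *head;
        if (*head) (*head)->prev = f;
        *head = f;
        __ofl_unlock();
        return f;
    }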
|
|
this can be used to put off writing an asm version of __unmapself for
new archs, or as a permanent solution on archs where it's not
practical or even possible to run momentarily with no stack.
the concept here is simple: the caller takes a lock on a global shared
stack and uses it to make the munmap and exit syscalls. the only trick
is unlocking, which must be done after the thread exits, and this is
achieved by using the set_tid_address syscall to have the kernel zero
and futex-wake the lock word as part of the exit syscall.
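a hedged C sketch of that scheme (CRTJMP, a_cas, a_spin and __syscall
are internal primitives; CRTJMP jumps to code with a fresh stack
pointer):

    static volatile int lock;
    static void *unmap_base;
    static size_t unmap_size;
    static char shared_stack[256];

    static void do_unmap(void)
    {
        __syscall(SYS_munmap, unmap_base, unmap_size);
        __syscall(SYS_exit, 0);
    }

    void __unmapself(void *base, size_t size)
    {
        char *stack = shared_stack + sizeof shared_stack;
        stack -= (uintptr_t)stack % 16;
        while (lock || a_cas(&lock, 0, 1)) a_spin();
        /* kernel zeroes and futex-wakes &lock when this thread exits */
        __syscall(SYS_set_tid_address, &lock);
        unmap_base = base;
        unmap_size = size;
        CRTJMP(do_unmap, stack);
    }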
|
|
otherwise disassemblers treat it as data.
|
|
the code being removed used atomics to track whether any threads might
be using a locale other than the current global locale, and whether
any threads might have abstract 8-bit (non-UTF-8) LC_CTYPE active, a
feature which was never committed (still pending). the motivations
were to support early execution prior to setup of the thread pointer,
to partially support systems (ancient kernels) where thread pointer
setup is not possible, and to avoid high performance cost on archs
where accessing the thread pointer may be very slow.
since commit 19a1fe670acb3ab9ead0fe31859ca7d4fe40dd54, the thread
pointer is always available, so these hacks are no longer needed.
removing them greatly simplifies the affected code.
|
|
commit f630df09b1fd954eda16e2f779da0b5ecc9d80d3 added logic to handle
the case where __set_thread_area is called more than once by reusing
the GDT slot already in the %gs register, and only setting up a new
GDT slot when %gs is zero. this created a hidden assumption that %gs
is zero when a new process image starts, which is true in practice on
Linux, but does not seem to be documented ABI, and fails to hold under
qemu app-level emulation.
while it would in theory be possible to zero %gs in the entry point
code, this code is shared between static and dynamic binaries, and
dynamic binaries must not clobber the value of %gs already set up by
the dynamic linker.
the alternative solution implemented in this commit simply uses global
data to store the GDT index that's selected. __set_thread_area should
only be called in the initial thread anyway (subsequent threads get
their thread pointer set up by __clone), but even if it were called by
another thread, it would simply read and write back the same GDT index
that was already assigned to the initial thread, and thus (in the x86
memory model) there is no data race.
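a hedged C rendering of the approach (the real function is i386 asm;
struct user_desc is from <asm/ldt.h>, __syscall is internal):

    static int gdt_index = -1; /* global state instead of probing %gs */

    int __set_thread_area(void *p)
    {
        struct user_desc d = {
            .entry_number = gdt_index, /* -1 initially: kernel allocates */
            .base_addr = (uintptr_t)p,
            .limit = 0xfffff,
            .seg_32bit = 1,
            .limit_in_pages = 1,
            .useable = 1,
        };
        int r = __syscall(SYS_set_thread_area, &d);
        if (r < 0) return r;
        gdt_index = d.entry_number;
        /* load %gs with the selector: index*8, GDT, RPL 3 */
        __asm__ __volatile__ ("movw %w0,%%gs" :: "r"(gdt_index*8+3));
        return 0;
    }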
|
|
i386, x86_64, x32, and powerpc all use TLS for stack protector canary
values in the default stack protector ABI, but the location only
matched the ABI on i386 and x86_64. on x32, the expected location for
the canary contained the tid, thus producing spurious mismatches
(resulting in process termination) upon fork. on powerpc, the expected
location contained the stdio_locks list head, so returning from a
function after calling flockfile produced spurious mismatches. in both
cases, the random canary was not present, and a predictable value was
used instead, making the stack protector hardening much less effective
than it should be.
in the current fix, the thread structure has been expanded to have
canary fields at all three possible locations, and archs that use a
non-default location must define a macro in pthread_arch.h to choose
which location is used. for most archs (which lack TLS canary ABI) the
choice does not matter.
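a hedged sketch of the expanded thread structure (field names assumed;
the per-arch macro in pthread_arch.h picks which slot is the live one):

    #include <stdint.h>

    struct pthread {
        struct pthread *self;
        void **dtv;
        /* ... */
        uintptr_t canary;        /* default TLS canary location */
        uintptr_t canary2;       /* alternate location (e.g. x32) */
        /* ... */
        uintptr_t canary_at_end; /* alternate location (e.g. powerpc) */
    };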
|
|
the kernel does not properly clear the upper bits of the syscall
argument, so we have to do it before the syscall.
|
|
use CAS instead of swap since it's lighter for most archs, and keep
EBUSY in the lock value so that the old value obtained by CAS can be
used directly as the return value for pthread_spin_trylock.
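the resulting functions are essentially (a sketch matching the
description above; a_cas is the internal atomic primitive):

    #include <pthread.h>
    #include <errno.h>

    int pthread_spin_trylock(pthread_spinlock_t *s)
    {
        /* old value 0 means success; EBUSY means already locked */
        return a_cas((volatile int *)s, 0, EBUSY);
    }

    int pthread_spin_lock(pthread_spinlock_t *s)
    {
        while (*(volatile int *)s || a_cas((volatile int *)s, 0, EBUSY));
        return 0;
    }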
|
|
|
|
the leak was found by static analysis (reported by Alexander Monakov),
not tested/observed, but seems to have occurred both when failing due
to O_EXCL, and in a race condition with O_CREAT but not O_EXCL where a
semaphore by the same name was created concurrently.
|
|
this fixes truncation of error messages containing long pathnames or
symbol names.
the dlerror state was previously required by POSIX to be global. the
resolution of bug 97 relaxed the requirements to allow thread-safe
implementations of dlerror with thread-local state and message buffer.
|
|
even hidden functions need @PLT symbol references; otherwise an
absolute address is produced instead of a PC-relative one.
|
|
this caused the dynamic linker/startup code to abort when r0 happened
to contain a negative value.
|