path: root/arch/arm/pthread_arch.h
2016-12-19  rework arm atomic/tp backends to be thumb-compatible and fdpic-ready  (Rich Felker, 1 file changed, -7/+9)
three problems are addressed:

- use of pc arithmetic, which was difficult if not impossible to make correct in thumb mode on all models, so that relative rather than absolute pointers to the backends could be used. this was designed back when there was no coherent model for the early stages of the dynamic linker before relocations, and is no longer necessary.

- assumption that data (the relative pointers to the backends) can be accessed at a constant displacement from the code. this will not be possible on future fdpic subarchs (for cortex-m), so move responsibility for loading the backend code address to the caller.

- hard-coded arm opcodes using the .word directive. instead, use the .arch directive to work around the assembler's refusal to assemble instructions not available (or in some cases, available but just considered deprecated) in the target isa level. the obscure v6t2 arch is used for v6 code so as to (1) allow generation of thumb2 output if -mthumb is active, and (2) avoid warnings/errors for mcr barriers that clang would produce if we just set arch to v7-a.

in addition, the __aeabi_read_tp function is moved out of the inner workings and implemented as an asm wrapper around a C function, so that asm code does not need to read global data. the asm wrapper serves to satisfy the ABI calling convention requirements for this function.
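to illustrate the last point, a minimal sketch (illustrative names, not musl's actual source, ARM EABI assumed) of an asm wrapper around a C function: the wrapper saves the registers that, per the EABI, __aeabi_read_tp must preserve beyond the normal calling convention, while the C helper is free to use ordinary code and data accesses.

	/* sketch only: __sketch_get_tp stands for some internal routine
	 * that yields the raw thread-pointer value */
	void *__sketch_get_tp(void);

	__attribute__((visibility("hidden")))
	void *__read_tp_c(void)
	{
		return __sketch_get_tp();  /* ordinary C; free to touch global data */
	}

	/* the wrapper preserves r1-r3, which __aeabi_read_tp is expected to
	 * leave untouched but which a normal C call could clobber */
	__asm__(
	".global __aeabi_read_tp\n"
	".type __aeabi_read_tp,%function\n"
	"__aeabi_read_tp:\n"
	"	push {r1,r2,r3,lr}\n"
	"	bl __read_tp_c\n"
	"	pop {r1,r2,r3,pc}\n"   /* result stays in r0 */
	);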
2015-11-02  properly access mcontext_t program counter in cancellation handler  (Rich Felker, 1 file changed, -1/+1)
using the actual mcontext_t definition rather than an overlaid pointer array both improves correctness/readability and eliminates some ugly hacks for archs with 64-bit registers but 32-bit program counter. also fix UB due to comparison of pointers not in a common array object.
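as a rough illustration only (not musl's code; arm_pc is the field name linux's arm mcontext_t provides), reading the program counter through the real structure rather than an overlaid register array looks like this:

	#define _GNU_SOURCE
	#include <stdint.h>
	#include <ucontext.h>

	/* sketch: pull the pc out of the named mcontext_t field instead of
	 * indexing an overlaid array of registers */
	static uintptr_t sig_pc(void *ctx)
	{
		ucontext_t *uc = ctx;
		return (uintptr_t)uc->uc_mcontext.arm_pc;
	}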
2015-10-15  mark arm thread-pointer-loading inline asm as volatile  (Rich Felker, 1 file changed, -3/+3)
this builds on commits a603a75a72bb469c6be4963ed1b55fabe675fe15 and 0ba35d69c0e77b225ec640d2bd112ff6d9d3b2af to ensure that a compiler cannot conclude that it's valid to reorder the asm to a point before the thread pointer is set up, or to treat the inline function as if it were declared with attribute((const)). other archs already use volatile asm for thread pointer loading.
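roughly, the construct in question has the shape sketched below (illustrative name, per-arch offset adjustment omitted): a read of the user-read-only thread ID register TPIDRURO via cp15, marked volatile so the compiler can neither hoist it above the point where the thread pointer is installed nor fold repeated reads as if the function were const.

	/* sketch: volatile inline-asm read of the arm thread register */
	static inline void *tp_read(void)
	{
		void *tp;
		__asm__ __volatile__ ("mrc p15,0,%0,c13,c0,3" : "=r"(tp));
		return tp;
	}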
2015-10-15  remove attribute((const)) from arm __pthread_self inline function  (Rich Felker, 1 file changed, -2/+2)
commit a603a75a72bb469c6be4963ed1b55fabe675fe15 did this for the public pthread_self function but not the internal inline one.
2014-11-19  overhaul ARM atomics/tls for performance and compatibility  (Rich Felker, 1 file changed, -3/+11)
previously, builds for pre-armv6 targets hard-coded use of the "kuser helper" system for atomics and thread-pointer access, resulting in binaries that fail to run (crash) on systems where this functionality has been disabled (as a security/hardening measure) in the kernel.

additionally, builds for armv6 hard-coded an outdated/deprecated memory barrier instruction which may require emulation (extremely slow) on future models.

this overhaul replaces the behavior for all pre-armv7 builds (both of the above cases) to perform runtime detection of the appropriate mechanisms for barrier, atomic compare-and-swap, and thread pointer access. detection is based on information provided by the kernel in auxv: presence of the HWCAP_TLS bit for AT_HWCAP and the architecture version encoded in AT_PLATFORM. direct use of the instructions is preferred when possible, since probing for the existence of the kuser helper page would be difficult and would incur runtime cost.

for builds targeting armv7 or later, the runtime detection code is not compiled at all, and much more efficient versions of the non-cas atomic operations are provided by using ldrex/strex directly rather than wrapping cas.
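a simplified sketch of that kind of auxv-based detection (not musl's internal code, which reads its own saved copy of auxv rather than calling getauxval; HWCAP_TLS is the arm-specific hwcap bit, and AT_PLATFORM on arm is a string such as "v6l" or "v7l"):

	#include <elf.h>
	#include <sys/auxv.h>

	#ifndef HWCAP_TLS
	#define HWCAP_TLS (1<<15)   /* arm-specific bit from asm/hwcap.h */
	#endif

	/* sketch: decide whether the native thread-register read and
	 * ldrex/strex atomics can be used instead of the kuser helpers
	 * (the real logic also distinguishes v6 vs v7 barriers) */
	static int native_tp_and_atomics(void)
	{
		unsigned long hwcap = getauxval(AT_HWCAP);
		const char *plat = (const char *)getauxval(AT_PLATFORM); /* e.g. "v7l" */
		int ver = (plat && plat[0]=='v') ? plat[1]-'0' : 0;
		return (hwcap & HWCAP_TLS) && ver >= 6;
	}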
2014-04-30  fix arm thread-pointer/atomic asm when compiling to thumb code  (Rich Felker, 1 file changed, -3/+2)
armv7/thumb2 provides a way to do atomics in thumb mode, but for armv6 we need a call to arm mode. this commit is based on a patch by Stephen Thomas which fixed the armv7 cases but not the armv6 ones. all of this should be revisited if/when runtime selection of thread pointer access and atomics are added.
2014-04-07  use inline atomics and thread pointer on arm models supporting them  (Rich Felker, 1 file changed, -1/+15)
this is perhaps not the optimal implementation; a_cas still compiles to nested loops due to the different interface contracts of the kuser helper cas function (whose contract this patch implements) and the a_cas function (whose contract mimics the x86 cmpxchg). fixing this may be possible, but it's more complicated and thus deferred until a later time.

aside from improving performance and code size, this patch also provides a means of producing binaries which can run on hardened kernels where the kuser helpers have been disabled. however, at present this requires producing binaries for armv6k or later, which will not run on older cpus. a real solution to the problem of kernels that omit the kuser helpers would be runtime detection, so that universal binaries which run on all arm cpu models can also be compatible with all kernel hardening profiles. robust detection however is a much harder problem, and will be addressed at a later time.
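the contract mismatch is easiest to see in code: the kuser cmpxchg helper at the fixed address 0xffff0fc0 only reports success or failure, while a_cas must return the value it observed (x86 cmpxchg style), so the wrapper needs its own retry loop, and callers that build other atomics on top of a_cas loop again around it — hence the nested loops. a sketch with illustrative names, not exactly musl's source:

	/* kernel-provided helper in the vector page (documented kernel ABI):
	 * returns 0 if *ptr was changed from oldval to newval, nonzero otherwise */
	typedef int (*kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
	#define KUSER_CMPXCHG ((kuser_cmpxchg_t)0xffff0fc0)

	/* sketch: a_cas-style wrapper returning the observed old value */
	static int cas_sketch(volatile int *p, int t, int s)
	{
		int old;
		for (;;) {
			if (!KUSER_CMPXCHG(t, s, p)) return t;  /* swap happened; old value was t */
			if ((old = *p) != t) return old;        /* genuine mismatch */
			/* helper failed although the value matches (contention); retry */
		}
	}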
2012-10-15  add support for TLS variant I, presently needed for arm and mips  (Rich Felker, 1 file changed, -3/+6)
despite documentation that makes it sound a lot different, the only ABI-constraint difference between TLS variants II and I seems to be that variant II stores the initial TLS segment immediately below the thread pointer (i.e. the thread pointer points to the end of it) and variant I stores the initial TLS segment above the thread pointer, requiring the thread descriptor to be stored below. the actual value stored in the thread pointer register also tends to have per-arch random offsets applied to it for silly micro-optimization purposes.

with these changes applied, TLS should be basically working on all supported archs except microblaze. I'm still working on getting the necessary information and a working toolchain that can build TLS binaries for microblaze, but in theory, static-linked programs with TLS and dynamic-linked programs where only the main executable uses TLS should already work on microblaze.

alignment constraints have not yet been heavily tested, so it's possible that this code does not always align TLS segments correctly on archs that need TLS variant I.
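in other words (a schematic only; the per-arch offsets mentioned above are ignored here): variant II addresses the initial TLS block at negative offsets from the thread pointer, variant I at positive ones, with the thread descriptor on the opposite side in each case.

	/* sketch of the addressing consequence (per-arch offset fudge omitted):
	 * variant II: descriptor at tp, initial tls block ends at tp, so tls
	 * lives at negative offsets; variant I: descriptor below tp, initial
	 * tls block starts at (or just above) tp, so tls lives at positive
	 * offsets. */
	static void *tls_slot_v2(char *tp, unsigned long off) { return tp - off; }
	static void *tls_slot_v1(char *tp, unsigned long off) { return tp + off; }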
2012-02-25  use __attribute__((const)) on arm __pthread_self function  (Rich Felker, 1 file changed, -1/+3)
2011-09-22"optimize" arm __pthread_selfRich Felker1-4/+1
actually this is just to avoid gcc being stupid and refusing to inline the function version, even when the size cost is essentially identical whether it's inlined or not.
2011-09-18  initial commit of the arm port  (Rich Felker, 1 file changed, -0/+7)
this port assumes eabi calling conventions, eabi linux syscall convention, and presence of the kernel helpers at 0xffff0f?0 needed for threads support. otherwise it makes very few assumptions, and the code should work even on armv4 without thumb support, as well as on systems with thumb interworking. the bits headers declare this a little endian system, but as far as i can tell the code should work equally well on big endian.

some small details are probably broken; so far, testing has been limited to qemu/aboriginal linux.
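the kernel helpers referred to are fixed entry points in the kernel-mapped vector page; for example the thread-pointer helper sits at 0xffff0fe0 (__kernel_get_tls in the kernel's documentation). a sketch of calling it from C, with made-up names:

	/* sketch: calling the kuser get_tls helper through its fixed address;
	 * the address is the documented kernel ABI, the names are illustrative */
	typedef void *(*kuser_get_tls_t)(void);
	#define KUSER_GET_TLS ((kuser_get_tls_t)0xffff0fe0)

	static void *thread_pointer(void)
	{
		return KUSER_GET_TLS();   /* returns the current thread-pointer value */
	}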