path: root/src
2015-04-17  fix sh __set_thread_area uninitialized return value  (Rich Felker, 1 file, -1/+2)
this caused the dynamic linker/startup code to abort when r0 happened to contain a negative value.
2015-04-17  redesign sigsetjmp so that signal mask is restored after longjmp  (Rich Felker, 12 files, -133/+177)
the conventional way to implement sigsetjmp is to save the signal mask then tail-call to setjmp; siglongjmp then restores the signal mask and calls longjmp. the problem with this approach is that a signal already pending, or arriving between unmasking of signals and restoration of the saved stack pointer, will have its signal handler run on the stack that was active before siglongjmp was called. this can lead to unbounded stack usage when siglongjmp is used to leave a signal handler.

in the new design, sigsetjmp saves its own return address inside the extended part of the sigjmp_buf (outside the __jmp_buf part used by setjmp) then calls setjmp to save a jmp_buf inside its own execution. it then tail-calls to __sigsetjmp_tail, which uses the return value of setjmp to determine whether to save the current signal mask or restore a previously-saved mask.

as an added bonus, this design makes it so that siglongjmp and longjmp are identical. this is useful because the __longjmp_chk function we need to add for ABI-compatibility assumes siglongjmp and longjmp are the same, but for different reasons -- it was designed assuming either can access a flag just past the __jmp_buf indicating whether the signal mask was saved, and act on that flag. however, early versions of musl did not have space past the __jmp_buf for the non-sigjmp_buf version of jmp_buf, so our setjmp cannot store such a flag without risking clobbering memory on (very) old binaries.
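the new control flow is easiest to see in C. a minimal sketch of the tail function follows, assuming the saved-mask slot is the __ss member of musl's sigjmp_buf; the real version issues a raw rt_sigprocmask syscall rather than calling sigprocmask:

    #include <setjmp.h>
    #include <signal.h>

    /* sketch only: musl's actual __sigsetjmp_tail uses a raw syscall */
    int __sigsetjmp_tail(sigjmp_buf jb, int ret)
    {
        sigset_t *p = (sigset_t *)jb->__ss;      /* space past the __jmp_buf part */
        if (ret) sigprocmask(SIG_SETMASK, p, 0); /* second return: restore saved mask */
        else sigprocmask(SIG_SETMASK, 0, p);     /* first return: save current mask */
        return ret;
    }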
2015-04-14  use hidden __tls_get_new for tls/tlsdesc lookup fallback cases  (Rich Felker, 4 files, -5/+13)
previously, the dynamic tlsdesc lookup functions and the i386 special-ABI ___tls_get_addr (3 underscores) function called __tls_get_addr when the slot they wanted was not already setup; __tls_get_addr would then in turn also see that it's not setup and call __tls_get_new. calling __tls_get_new directly is both more efficient and avoids the issue of calling a non-hidden (public API/ABI) function from asm. for the special i386 function, a weak reference to __tls_get_new is used since this function is not defined when static linking (the code path that needs it is unreachable in static-linked programs).
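in C terms, the reference pattern looks like the following sketch; the actual i386 reference is made from asm via .weak/.hidden directives, and the exact internal signature may differ:

    #include <stddef.h>

    /* hidden: allows a direct (non-PLT) call from asm.
       weak: lets static linking succeed when the symbol is absent,
       since the code path that would call it is then unreachable. */
    extern void *__tls_get_new(size_t *)
        __attribute__((__visibility__("hidden"), __weak__));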
2015-04-14  cleanup use of visibility attributes in pthread_cancel.c  (Rich Felker, 1 file, -8/+9)
applying the attribute to a weak_alias macro was a hack. instead use a separate declaration to apply the visibility, and consolidate declarations together to avoid having visibility mess all over the file.
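the resulting pattern, roughly sketched (declaration list abbreviated; weak_alias shown as defined in musl's internal headers):

    /* one consolidated declaration carries the visibility for the
       asm-implemented internals... */
    __attribute__((__visibility__("hidden")))
    long __cancel(void), __syscall_cp_asm(), __syscall_cp_c();

    /* ...while weak_alias remains a plain macro, unchanged: */
    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))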
2015-04-14  fix inconsistent visibility for internal syscall symbols  (Rich Felker, 12 files, -1/+16)
2015-04-14  use hidden visibility for call from dlsym to internal __dlsym  (Rich Felker, 11 files, -3/+14)
2015-04-14  consistently use hidden visibility for cancellable syscall internals  (Rich Felker, 11 files, -30/+96)
in a few places, non-hidden symbols were referenced from asm in ways that assumed ld-time binding. while there is no semantic reason these symbols need to be hidden, fixing the references without making them hidden was going to be ugly, and hidden reduces some bloat anyway. in the asm files, .global/.hidden directives have been moved to the top to unclutter the actual code.
2015-04-14  fix inconsistent visibility for internal __tls_get_new function  (Rich Felker, 2 files, -3/+3)
at the point of call it was declared hidden, but the definition was not hidden. for some toolchains this inconsistency produced textrels without ld-time binding.
2015-04-14  use hidden visibility for i386 asm-internal __vsyscall symbol  (Rich Felker, 1 file, -0/+2)
otherwise the call instruction in the inline syscall asm results in textrels without ld-time binding.
2015-04-14  make _dlstart_c function use hidden visibility  (Rich Felker, 1 file, -0/+1)
otherwise the call/jump from the crt_arch.h asm may not resolve correctly without -Bsymbolic-functions.
2015-04-13  remove initializers for decoded aux/dyn arrays in dynamic linker  (Rich Felker, 1 file, -5/+5)
the zero initialization is redundant since decode_vec does its own clearing, and it increases the risk that buggy compilers will generate calls to memset. as long as symbols are bound at ld time, such a call will not break anything, but it may be desirable to turn off ld-time binding in the future.
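for illustration, the function looks roughly like this (a sketch modeled on musl's dynlink.c, not a verbatim copy):

    /* sketch of decode_vec: a[] is fully cleared up front, then slots
       whose tag fits in cnt are filled and marked present in a[0] */
    static void decode_vec(size_t *v, size_t *a, size_t cnt)
    {
        size_t i;
        for (i=0; i<cnt; i++) a[i] = 0;
        for (; v[0]; v+=2) if (v[0] < cnt) {
            a[0] |= 1UL << v[0];
            a[v[0]] = v[1];
        }
    }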
2015-04-13  allow libc itself to be built with stack protector enabled  (Rich Felker, 1 file, -0/+10)
this was already essentially possible as a result of the previous commits changing the dynamic linker/thread pointer bootstrap process. this commit mainly adds build system infrastructure: configure no longer attempts to disable stack protector. instead it simply determines how, so that the makefile can disable stack protector for a few translation units used during early startup.

stack protector is also disabled for memcpy and memset since compilers (incorrectly) generate calls to them on some archs to implement struct initialization and assignment, and such calls may creep into early initialization.

no explicit attempt to enable stack protector is made by configure at this time; any stack protector option supported by the compiler can be passed to configure in CFLAGS, and if the compiler uses stack protector by default, this default is respected.
2015-04-13  remove remnants of support for running in no-thread-pointer mode  (Rich Felker, 10 files, -32/+13)
since 1.1.0, musl has nominally required a thread pointer to be setup. most of the remaining code that was checking for its availability was doing so for the sake of being usable by the dynamic linker. as of commit 71f099cb7db821c51d8f39dfac622c61e54d794c, this is no longer necessary; the thread pointer is now valid before any libc code (outside of dynamic linker bootstrap functions) runs. this commit essentially concludes "phase 3" of the "transition path for removing lazy init of thread pointer" project that began during the 1.1.0 release cycle.
2015-04-13  move thread pointer setup to beginning of dynamic linker stage 3  (Rich Felker, 1 file, -8/+23)
this allows the dynamic linker itself to run with a valid thread pointer, which is a prerequisite for stack protector on archs where the ssp canary is stored in TLS. it will also allow us to remove some remaining runtime checks for whether the thread pointer is valid. as long as the application and its libraries do not require additional size or alignment, this early thread pointer will be kept and reused at runtime. otherwise, a new static TLS block is allocated after library loading has finished and the thread pointer is switched over.
2015-04-13  stabilize dynamic linker's layout of static TLS  (Rich Felker, 1 file, -9/+6)
previously, the layout of the static TLS block was perturbed by the size of the dtv; dtv size increasing from 0 to 1 perturbed both TLS arch types, and the TLS-above-TP type's layout was perturbed by the specific number of dtv slots (libraries with TLS). this behavior made it virtually impossible to setup a tentative thread pointer address before loading libraries and keep it unchanged as long as the libraries' TLS size/alignment requirements fit. the new code fixes the location of the dtv and pthread structure at opposite ends of the static TLS block so that they will not move unless size or alignment changes.
2015-04-13  allow i386 __set_thread_area to be called more than once  (Rich Felker, 1 file, -1/+5)
previously a new GDT slot was requested, even if one had already been obtained by a previous call. instead extract the old slot number from GS and reuse it if it was already set. the formula (GS-3)/8 for the slot number automatically yields -1 (request for new slot) if GS is zero (unset).
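the formula relies on floor division, which the asm gets from an arithmetic right shift; a small standalone demonstration (assuming signed right shift behaves arithmetically, as the sar instruction guarantees):

    #include <stdio.h>

    /* selector for GDT TLS entry n is n*8+3 (RPL 3, GDT), so the
       inverse mapping is (gs-3)>>3; the shift makes gs==0 yield -1,
       the kernel's "allocate a new entry" request value */
    static int slot_from_gs(int gs)
    {
        return (gs - 3) >> 3;
    }

    int main(void)
    {
        printf("%d\n", slot_from_gs(0));        /* -1: request a new slot */
        printf("%d\n", slot_from_gs(6*8 + 3));  /*  6: reuse existing slot 6 */
        return 0;
    }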
2015-04-13  dynamic linker bootstrap overhaul  (Rich Felker, 14 files, -442/+338)
this overhaul further reduces the amount of arch-specific code needed by the dynamic linker and removes a number of assumptions, including:

- that symbolic function references inside libc are bound at link time via the linker option -Bsymbolic-functions.
- that libc functions used by the dynamic linker do not require access to data symbols.
- that static/internal function calls and data accesses can be made without performing any relocations, or that arch-specific startup code handled any such relocations needed.

removing these assumptions paves the way for allowing libc.so itself to be built with stack protector (among other things), and is achieved by a three-stage bootstrap process:

1. relative relocations are processed with a flat function.
2. symbolic relocations are processed with no external calls/data.
3. main program and dependency libs are processed with a fully-functional libc/ldso.

reduction in arch-specific code is achieved through the following:

- crt_arch.h, used for generating crt1.o, now provides the entry point for the dynamic linker too.
- asm is no longer responsible for skipping the beginning of argv[] when ldso is invoked as a command.
- the functionality previously provided by __reloc_self for heavily GOT-dependent RISC archs is now the arch-agnostic stage-1.
- arch-specific relocation type codes are mapped directly as macros rather than via an inline translation function/switch statement.
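as an example of the last point, a per-arch reloc.h now reduces to direct macro definitions along these lines (x86_64 shown; the exact macro set varies by arch):

    #define REL_SYMBOLIC    R_X86_64_64
    #define REL_GOT         R_X86_64_GLOB_DAT
    #define REL_PLT         R_X86_64_JUMP_SLOT
    #define REL_RELATIVE    R_X86_64_RELATIVE
    #define REL_COPY        R_X86_64_COPY
    #define REL_DTPMOD      R_X86_64_DTPMOD64
    #define REL_DTPOFF      R_X86_64_DTPOFF64
    #define REL_TPOFF       R_X86_64_TPOFF64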
2015-04-11  remove mismatched arguments from vmlock function definitions  (Rich Felker, 1 file, -2/+2)
commit f08ab9e61a147630497198fe3239149275c0a3f4 introduced these accidentally as remnants of some work I tried that did not work out.
2015-04-10  apply vmlock wait to __unmapself in pthread_exit  (Rich Felker, 1 file, -0/+4)
2015-04-10  redesign and simplify vmlock system  (Rich Felker, 8 files, -45/+29)
this global lock allows certain unlock-type primitives to exclude mmap/munmap operations which could change the identity of virtual addresses while references to them still exist. the original design mistakenly assumed mmap/munmap would conversely need to exclude the same operations which exclude mmap/munmap, so the vmlock was implemented as a sort of 'symmetric recursive rwlock'. this turned out to be unnecessary.

commit 25d12fc0fc51f1fae0f85b4649a6463eb805aa8f already shortened the interval during which mmap/munmap held their side of the lock, but left the inappropriate lock design and some inefficiency.

the new design uses a separate function, __vm_wait, which does not hold any lock itself and only waits for lock users which were already present when it was called to release the lock. this is sufficient because of the way operations that need to be excluded are sequenced: the "unlock-type" operations using the vmlock need only block mmap/munmap operations that are precipitated by (and thus sequenced after) the atomic-unlock they perform while holding the vmlock. this allows for a spectacular lack of synchronization in the __vm_wait function itself.
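the whole mechanism reduces to a counter plus a waiter flag; roughly the following sketch, where a_inc and a_fetch_add are musl's internal atomics and __wait/__wake its internal futex wrappers:

    static volatile int vmlock[2];   /* [0]: count of lock holders, [1]: waiters */

    void __vm_wait()    /* waits for existing holders only; takes no lock itself */
    {
        int tmp;
        while ((tmp = vmlock[0]))
            __wait(vmlock, vmlock+1, tmp, 1);
    }

    void __vm_lock()
    {
        a_inc(vmlock);
    }

    void __vm_unlock()
    {
        if (a_fetch_add(vmlock, -1) == 1 && vmlock[1])
            __wake(vmlock, -1, 1);
    }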
2015-04-10  optimize out setting up robust list with kernel when not needed  (Rich Felker, 4 files, -7/+8)
as a result of commit 12e1e324683a1d381b7f15dd36c99b37dd44d940, kernel processing of the robust list is only needed for process-shared mutexes. previously the first attempt to lock any owner-tracked mutex resulted in robust list initialization and a set_robust_list syscall. this is no longer necessary, and since the kernel's record of the robust list must now be cleared at thread exit time for detached threads, optimizing it out is more worthwhile than before too.
2015-04-10  process robust list in pthread_exit to fix detached thread use-after-unmap  (Rich Felker, 2 files, -26/+27)
the robust list head lies in the thread structure, which is unmapped before exit for detached threads. this leaves the kernel unable to process the exiting thread's robust list, and with a dangling pointer which may happen to point to new unrelated data at the time the kernel processes it.

userspace processing of the robust list was already needed for non-pshared robust mutexes in order to perform private futex wakes rather than the shared ones the kernel would do, but it was conditional on linking pthread_mutexattr_setrobust and did not bother processing the pshared mutexes in the list, which requires additional logic for the robust list pending slot in case pthread_exit is interrupted by asynchronous process termination.

the new robust list processing code is linked unconditionally (inlined in pthread_exit), handles both private and shared mutexes, and also removes the kernel's reference to the robust list before unmapping and exit if the exiting thread is detached.
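in outline, the exit-time walk looks something like the following illustrative sketch; field names follow musl's pthread_impl.h, 0x40000000 is the kernel's FUTEX_OWNER_DIED bit, and error handling is omitted:

    volatile void *volatile *rp;
    while ((rp = self->robust_list.head) && rp != &self->robust_list.head) {
        pthread_mutex_t *m = (void *)((char *)rp
            - offsetof(pthread_mutex_t, _m_next));
        int priv = (m->_m_type & 128) ^ 128;  /* pshared flag selects wake type */
        self->robust_list.pending = rp;       /* recovery point if killed here */
        self->robust_list.head = *rp;         /* unlink from the list */
        a_swap(&m->_m_lock, 0x40000000);      /* release with owner-died bit set */
        self->robust_list.pending = 0;
        __wake(&m->_m_lock, 1, priv);         /* private wake unless pshared */
    }
    /* drop the kernel's reference before the thread's memory can be unmapped */
    __syscall(SYS_set_robust_list, 0, 3*sizeof(long));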
2015-04-04  fix getdelim to set the error indicator on all failures  (Szabolcs Nagy, 1 file, -2/+5)
2015-04-04  fix rpath string memory leak on failed dlopen  (Rich Felker, 1 file, -0/+2)
when dlopen fails, all partially-loaded libraries need to be unmapped and freed. any of these libraries using an rpath with $ORIGIN expansion may have an allocated string for the expanded rpath; previously, this string was not freed when freeing the library data structures.
2015-04-03  halt dynamic linker library search on errors resolving $ORIGIN in rpath  (Rich Felker, 1 file, -8/+18)
this change hardens the dynamic linker against the possibility of loading the wrong library due to inability to expand $ORIGIN in rpath. hard failures such as excessively long paths or absence of /proc (when resolving /proc/self/exe for the main executable's origin) do not stop the path search, but memory allocation failures and any other potentially transient failures do. to implement this change, the meaning of the return value of fixup_rpath function is changed. returning zero no longer indicates that the dso's rpath string pointer is non-null; instead, the caller needs to check. a return value of -1 indicates a failure that should stop further path search.
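the call site then distinguishes the cases, roughly as in this sketch (path_open is the dynlink-internal search helper; the fragment is illustrative, not verbatim):

    int fd = -1;
    if (fixup_rpath(p, buf, sizeof buf) < 0)
        return -1;                /* hard/transient failure: stop the path search */
    if (p->rpath)                 /* zero return no longer implies non-null rpath */
        fd = path_open(name, p->rpath, buf, sizeof buf);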
2015-04-01  harden dynamic linker library path search  (Rich Felker, 1 file, -5/+16)
transient errors during the path search should not allow the search to continue and possibly open the wrong file. this patch eliminates most conditions where that could happen, but there is still a possibility that $ORIGIN-based rpath processing will have an allocation failure, causing the search to skip such a path. fixing this is left as a separate task. a small bug where overly-long path components caused an infinite loop rather than being skipped/ignored is also fixed.
2015-03-27  regex: fix character class repetitions  (Szabolcs Nagy, 1 file, -0/+5)
Internally regcomp needs to copy some iteration nodes before translating the AST into TNFA representation. Literal nodes were not copied correctly: the class type and list of negated class types were not copied, so classes were ignored (in the non-negated case an ignored char class caused the literal to match everything). This affects iterations whose upper bound is finite and larger than one, or whose lower bound is larger than one. So e.g. the EREs

  [[:digit:]]{2}
  [^[:space:]ab]{1,4}

were treated as

  .{2}
  [^ab]{1,4}

The fix is done with minimal source modification to copy the necessary fields, but the AST preparation and node handling code of tre will need to be cleaned up for clarity.
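A quick regression test for the fixed behavior, using only the public regcomp/regexec API:

    #include <assert.h>
    #include <regex.h>

    int main(void)
    {
        regex_t re;
        assert(regcomp(&re, "[[:digit:]]{2}", REG_EXTENDED) == 0);
        assert(regexec(&re, "42", 0, 0, 0) == 0);            /* matches */
        /* before the fix this also matched, as if the ERE were .{2} */
        assert(regexec(&re, "ab", 0, 0, 0) == REG_NOMATCH);
        regfree(&re);
        return 0;
    }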
2015-03-23  do not treat \0 as a backref in BRE  (Szabolcs Nagy, 1 file, -1/+1)
The valid BRE backref tokens are \1 .. \9, and 0 is not a special character either so \0 is undefined by the standard. Such undefined escaped characters are treated as literal characters currently, following existing practice, so \0 is the same as 0.
2015-03-23  fix internal buffer overrun in inet_pton  (Rich Felker, 1 file, -2/+3)
one stop condition for parsing abbreviated ipv6 addresses was missed, allowing the internal ip[] buffer to overflow. this patch adds the missing stop condition and masks the array index so that, in case there are any remaining stop conditions missing, overflowing the buffer is not possible.
2015-03-20  suppress backref processing in ERE regcomp  (Rich Felker, 1 file, -1/+1)
one of the features of ERE is that it's actually a regular language and does not admit expressions which cannot be matched in linear time. introduction of \n backref support into regcomp's ERE parsing was unintentional.
2015-03-20  fix memory-corruption in regcomp with backslash followed by high byte  (Rich Felker, 1 file, -1/+1)
the regex parser handles the (undefined) case of an unexpected byte following a backslash as a literal. however, instead of correctly decoding a character, it was treating the byte value itself as a character. this was not only semantically unjustified, but turned out to be dangerous on archs where plain char is signed: bytes in the range 252-255 alias the internal codes -4 through -1 used for special types of literal nodes in the AST.
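the aliasing is easy to demonstrate (output shown assumes an arch where plain char is signed):

    #include <stdio.h>

    int main(void)
    {
        char c = "\xfc"[0];      /* byte 252 */
        printf("%d\n", (int)c);  /* -4 where char is signed, colliding with
                                    tre's internal negative literal codes */
        return 0;
    }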
2015-03-16  block all signals (even internal ones) in cancellation signal handler  (Rich Felker, 1 file, -1/+2)
previously the implementation-internal signal used for multithreaded set*id operations was left unblocked during handling of the cancellation signal. however, on some archs, signal contexts are huge (up to 5k) and the possibility of nested signal handlers drastically increases the minimum stack requirement. since the cancellation signal handler will do its job and return in bounded time before possibly passing execution to application code, there is no need to allow other signals to interrupt it.
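the fix amounts to installing the handler with a full signal mask. in portable terms it looks like the sketch below; musl fills the mask directly and uses its internal reserved SIGCANCEL signal, for which SIGRTMIN stands in here:

    #include <signal.h>

    static void cancel_handler(int sig, siginfo_t *si, void *ctx)
    {
        /* runs with all signals blocked; returns in bounded time */
    }

    static void install(void)
    {
        struct sigaction sa = {
            .sa_flags = SA_SIGINFO | SA_RESTART,
            .sa_sigaction = cancel_handler,
        };
        sigfillset(&sa.sa_mask);      /* previously the internal signal was left open */
        sigaction(SIGRTMIN, &sa, 0);  /* stand-in for musl's internal SIGCANCEL */
    }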
2015-03-15  avoid sending huge names as nscd passwd/group queries  (Rich Felker, 1 file, -2/+3)
overly long user/group names are potentially a DoS vector and a source of other problems like partial writes by sendmsg, and are not useful.
2015-03-15  simplify nscd lookup code for alt passwd/group backends  (Rich Felker, 4 files, -15/+15)
previously, a sentinel value of (FILE *)-1 was used to inform the caller of __nscd_query that nscd is not in use. aside from being an ugly hack, this resulted in duplicate code paths for two logically equivalent cases: no nscd, and "not found" result from nscd. now, __nscd_query simply skips closing the socket and returns a valid FILE pointer when nscd is not in use, and produces a fake "not found" response header. the caller is then responsible for closing the socket just like it would do if it had gotten a real "not found" response.
2015-03-15  add alternate backend support for getgrouplist  (Josiah Worcester, 3 files, -24/+86)
This completes the alternate backend support that was previously added to the getpw* and getgr* functions. Unlike those, though, it unconditionally queries nscd. Any groups from nscd that aren't in the /etc/group file are added to the returned list, and any that are present in the file are ignored.

The purpose of this behavior is to provide a view of the group database consistent with what is observed by the getgr* functions. If group memberships reported by nscd were honored when the corresponding group already has a definition in the /etc/group file, the user's getgrouplist-based membership in the group would conflict with their non-membership in the reported gr_mem[] for the group.

The changes made also make getgrouplist thread-safe and eliminate its clobbering of the global getgrent state.
2015-03-11  add aarch64 port  (Szabolcs Nagy, 17 files, -0/+363)
This adds complete aarch64 target support including a bigendian subarch. Some of the long double math functions are known to be broken; otherwise, interfaces should be fully functional, but at this point consider this port experimental. Initial work on this port was done by Sireesh Tripurari and Kevin Bortis.
2015-03-11  math: add dummy implementations of 128 bit long double functions  (Szabolcs Nagy, 17 files, -4/+111)
This is in preparation for the aarch64 port, solely so that the long double math symbols are available on ld128 platforms. The implementations should be fixed up later once we have proper tests for these functions. Added bigendian handling for ld128 bit manipulations too.
2015-03-11  math: add ld128 exp2l based on the freebsd implementation  (Szabolcs Nagy, 1 file, -1/+366)
Changed the special case handling and bit manipulation to better match the double version.
2015-03-11  copy the dtv pointer to the end of the pthread struct for TLS_ABOVE_TP archs  (Szabolcs Nagy, 3 files, -4/+5)
There are two main abi variants for thread local storage layout:

(1) TLS is above the thread pointer at a fixed offset and the pthread struct is below that. So the end of the struct is at a known offset.

(2) The thread pointer points to the pthread struct and TLS starts below it. So the start of the struct is at a known (zero) offset.

Assembly code for the dynamic TLSDESC callback needs to access the dynamic thread vector (dtv) pointer, which is currently at the front of the pthread struct. So in case of (1) the asm code needs to hard-code the offset from the end of the struct, which can easily break if the struct changes.

This commit adds a copy of the dtv at the end of the struct. New members must not be added after dtv_copy, only before it. The size of the struct is increased a bit, but there is opportunity for size optimizations.
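A simplified sketch of the layout constraint (not musl's full struct):

    struct pthread {
        struct pthread *self;
        void **dtv;       /* fixed offset from the START: fine for abi (2) */
        /* ... new members may be added here ... */
        void **dtv_copy;  /* fixed offset from the END: what abi (1) asm uses */
    };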
2015-03-07  fix regression in pthread_cond_wait with cancellation disabled  (Rich Felker, 1 file, -0/+1)
due to a logic error in the use of masked cancellation mode, pthread_cond_wait did not honor PTHREAD_CANCEL_DISABLE but instead failed with ECANCELED when cancellation was pending.
2015-03-07  fix FLT_ROUNDS to reflect the current rounding mode  (Szabolcs Nagy, 1 file, -0/+19)
Implemented as a wrapper around fegetround, introducing a new function to the ABI: __flt_rounds. (fegetround cannot be used directly from float.h.)
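The wrapper is essentially a translation table over fegetround; a sketch (some archs lack the directed-rounding macros, so a real version needs #ifdef guards around those cases):

    #include <fenv.h>

    int __flt_rounds(void)
    {
        switch (fegetround()) {
        case FE_TOWARDZERO: return 0;
        case FE_TONEAREST:  return 1;
        case FE_UPWARD:     return 2;
        case FE_DOWNWARD:   return 3;
        }
        return -1;
    }

    /* float.h can then define: #define FLT_ROUNDS (__flt_rounds()) */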
2015-03-06  fix over-alignment of TLS, insufficient builtin TLS on 64-bit archs  (Rich Felker, 2 files, -4/+16)
a conservative estimate of 4*sizeof(size_t) was used as the minimum alignment for thread-local storage, despite the only requirements being alignment suitable for struct pthread and void* (which struct pthread already contains). additional alignment required by the application or libraries is encoded in their headers and is already applied. over-alignment prevented the builtin_tls array from ever being used in dynamic-linked programs on 64-bit archs, thereby requiring allocation at startup even in programs with no TLS of their own.
2015-03-04  add legacy functions from sysinfo.h duplicating sysconf functionality  (Rich Felker, 1 file, -0/+22)
2015-03-04  fix signed left-shift overflow in pthread_condattr_setpshared  (Rich Felker, 1 file, -1/+1)
2015-03-04  remove useless check of bin match in malloc  (Rich Felker, 1 file, -1/+1)
this re-check idiom seems to have been copied from the alloc_fwd and alloc_rev functions, which guess a bin based on non-synchronized memory access to adjacent chunk headers then need to confirm, after locking the bin, that the chunk is actually in the bin they locked. the check being removed, however, was being performed on a chunk obtained from the already-locked bin. there is no race to account for here; the check could only fail in the event of corrupt free lists, and even then it would not catch them but simply continue running. since the bin_index function is mildly expensive, it seems preferable to remove the check rather than trying to convert it into a useful consistency check. casual testing shows a 1-5% reduction in run time.
2015-03-04  eliminate atomics in syslog setlogmask function  (Rich Felker, 1 file, -4/+6)
2015-03-04  fix init race that could lead to deadlock in malloc init code  (Rich Felker, 1 file, -39/+14)
the malloc init code provided its own version of pthread_once type logic, including the exact same bug that was fixed in pthread_once in commit 0d0c2f40344640a2a6942dda156509593f51db5d. since this code is called adjacent to expand_heap, which takes a lock, there is no reason to have pthread_once-type initialization. simply moving the init code into the interval where expand_heap already holds its lock on the brk achieves the same result with much less synchronization logic, and allows the buggy code to be eliminated rather than just fixed.
2015-03-03  make all objects used with atomic operations volatile  (Rich Felker, 25 files, -57/+60)
the memory model we use internally for atomics permits plain loads of values which may be subject to concurrent modification without requiring that a special load function be used. since a compiler is free to make transformations that alter the number of loads or the way in which loads are performed, the compiler is theoretically free to break this usage.

the most obvious concern is with atomic cas constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be transformed to a_cas(p,*p,f(*p)); where the latter is intended to show multiple loads of *p whose resulting values might fail to be equal; this would break the atomicity of the whole operation. but even more fundamental breakage is possible.

with the changes being made now, objects that may be modified by atomics are modeled as volatile, and the atomic operations performed on them by other threads are modeled as asynchronous stores by hardware which happens to be acting on the request of another thread. such modeling of course does not itself address memory synchronization between cores/cpus, but that aspect was already handled. this all seems less than ideal, but it's the best we can do without mandating a C11 compiler and using the C11 model for atomics.

in the case of pthread_once_t, the ABI type of the underlying object is not volatile-qualified. so we are assuming that accessing the object through a volatile-qualified lvalue via casts yields volatile access semantics. the language of the C standard is somewhat unclear on this matter, but this is an assumption the linux kernel also makes, and seems to be the correct interpretation of the standard.
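the pattern being protected, sketched below; a_cas is musl's internal compare-and-swap, returning the old value of the object:

    static volatile int cnt;   /* volatile: every named load must be performed */

    void atomic_inc_via_cas(void)
    {
        int tmp;
        do tmp = cnt;                            /* exactly one load of cnt... */
        while (a_cas(&cnt, tmp, tmp+1) != tmp);  /* ...checked against by the cas */
    }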
2015-03-02  suppress masked cancellation in pthread_join  (Rich Felker, 1 file, -1/+5)
like close, pthread_join is a resource-deallocation function which is also a cancellation point. the intent of masked cancellation mode is to exempt such functions from failure with ECANCELED.
2015-03-02  fix namespace issue in pthread_join affecting thrd_join  (Rich Felker, 1 file, -1/+2)
pthread_testcancel is not in the ISO C reserved namespace and thus cannot be used here. use the namespace-protected version of the function instead.