|
the C standard specifies that setjmp is a macro, but longjmp is a
normal function. a macro version of it would be permitted (albeit
useless) for C (not C++), but would have to be a function-like macro,
not an object-like one.
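as an illustration of the constraint (a sketch, not the actual musl
header; the jmp_buf representation shown is an assumption):

    typedef unsigned long jmp_buf[8];     /* representation is arch-specific; assumed here */
    int setjmp(jmp_buf);
    #define setjmp(env) setjmp(env)       /* must be function-like if provided as a macro */
    _Noreturn void longjmp(jmp_buf, int); /* plain function; a macro version would be
                                             permitted for C, not C++, and useless */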
|
|
transient errors during the path search should not allow the search to
continue and possibly open the wrong file. this patch eliminates most
conditions where that could happen, but there is still a possibility
that $ORIGIN-based rpath processing will have an allocation failure,
causing the search to skip such a path. fixing this is left as a
separate task.
a small bug where overly-long path components caused an infinite loop
rather than being skipped/ignored is also fixed.
|
|
while it's the same for all presently supported archs, it differs at
least on sparc, and conceptually it's no less arch-specific than the
other O_* macros. O_SEARCH and O_EXEC are still defined in terms of
O_PATH in the main fcntl.h.
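for reference, a sketch of the resulting arrangement (the O_PATH value
shown is the common one; sparc and any future archs define their own):

    /* arch bits header (e.g. bits/fcntl.h) */
    #define O_PATH 010000000

    /* main fcntl.h, unchanged */
    #define O_SEARCH O_PATH
    #define O_EXEC   O_PATH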
|
|
|
|
POSIX requires the sem_nsems member to have type unsigned short. we
have to work around the incorrect kernel type using matching
endian-specific padding.
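the idiom looks roughly like this (illustrative struct only, not the
real semid_ds layout; the member names and the long-sized kernel field
are assumptions):

    #include <endian.h>

    struct nsems_demo {
    #if __BYTE_ORDER == __LITTLE_ENDIAN
        unsigned short sem_nsems;               /* overlays the kernel long's low bytes */
        char __pad[sizeof(long)-sizeof(short)];
    #else
        char __pad[sizeof(long)-sizeof(short)];
        unsigned short sem_nsems;               /* low bytes sit at the end on big endian */
    #endif
    };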
|
|
The shm_info struct is a GNU extension and some of its members do
not have the shm* prefix. This is worked around in sys/shm.h by
macros, but aarch64 didn't use those.
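The workaround pattern is roughly the following (illustrative; the
exact member names in the header may differ):

    struct shm_info {
        int __used_ids;
        unsigned long shm_tot, shm_rss, shm_swp;
        unsigned long __swap_attempts, __swap_successes;
    };
    #define used_ids __used_ids
    #define swap_attempts __swap_attempts
    #define swap_successes __swap_successes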
|
|
|
|
Internally regcomp needs to copy some iteration nodes before
translating the AST into TNFA representation.
Literal nodes were not copied correctly: the class type and list
of negated class types were not copied so classes were ignored
(in the non-negated case an ignored char class caused the literal
to match everything).
This affects iterations when the upper bound is finite and larger
than one, or when the lower bound is larger than one. So e.g. the EREs
[[:digit:]]{2}
[^[:space:]ab]{1,4}
were treated as
.{2}
[^ab]{1,4}
The fix is done with minimal source modification to copy the
necessary fields, but the AST preparation and node handling
code of tre will need to be cleaned up for clarity.
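A minimal reproduction sketch (an assumed test program, not part of
the patch): before the fix this printed "match" because the dropped
class let the literal match anything; after the fix it prints "no match".

    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t re;
        if (regcomp(&re, "[[:digit:]]{2}", REG_EXTENDED|REG_NOSUB))
            return 1;
        printf("%s\n", regexec(&re, "ab", 0, 0, 0) ? "no match" : "match");
        regfree(&re);
        return 0;
    }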
|
|
The valid BRE backref tokens are \1 .. \9, and 0 is not a special
character either, so \0 is undefined by the standard.
Such undefined escaped characters are treated as literal characters
currently, following existing practice, so \0 is the same as 0.
|
|
commit 559de8f5f06da9022cbba70e22e14a710eb74513 redefined FLT_ROUNDS
to use an external function that can report the actual current
rounding mode, rather than always reporting round-to-nearest. however,
float.h did not include 'extern "C"' wrapping for C++, so C++ programs
using FLT_ROUNDS ended up with an unresolved reference to a
name-mangled C++ function __flt_rounds.
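the relevant part of float.h now looks roughly like this (a sketch
based on the description above, not a verbatim quote of the header):

    #ifdef __cplusplus
    extern "C" {
    #endif
    int __flt_rounds(void);
    #ifdef __cplusplus
    }
    #endif
    #define FLT_ROUNDS (__flt_rounds())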
|
|
one stop condition for parsing abbreviated ipv6 addresses was missed,
allowing the internal ip[] buffer to overflow. this patch adds the
missing stop condition and masks the array index so that, in case
there are any remaining stop conditions missing, overflowing the
buffer is not possible.
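a sketch of the masking idea (illustrative helper, not the actual
parser code): even if some stop condition were still missing, the
write can never leave the 8-element group array.

    #include <stdint.h>

    static void store_group(uint16_t ip[static 8], int i, uint16_t group)
    {
        ip[i & 7] = group;   /* index masked to the array size */
    }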
|
|
one of the features of ERE is that it's actually a regular language
and does not admit expressions which cannot be matched in linear time.
introduction of \n backref support into regcomp's ERE parsing was
unintentional.
|
|
the regex parser handles the (undefined) case of an unexpected byte
following a backslash as a literal. however, instead of correctly
decoding a character, it was treating the byte value itself as a
character. this was not only semantically unjustified, but turned out
to be dangerous on archs where plain char is signed: bytes in the
range 252-255 alias the internal codes -4 through -1 used for special
types of literal nodes in the AST.
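a small demonstration of the aliasing hazard (assumed example): where
plain char is signed, bytes 252-255 read back as -4..-1, the same
range used for the internal node codes.

    #include <stdio.h>

    int main(void)
    {
        char c = (char)0xfc;      /* byte 252 */
        printf("%d\n", (int)c);   /* prints -4 on archs where char is signed */
        return 0;
    }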
|
|
|
|
|
|
the previous values (2k min and 8k default) were too small for some
archs. aarch64 reserves 4k in the signal context for future extensions
and requires about 4.5k total, and powerpc reportedly uses over 2k.
the new minimums are chosen to fit the saved context and also allow a
minimal signal handler to run.
since the default (SIGSTKSZ) has always been 6k larger than the
minimum, it is also increased to maintain the 6k usable by the signal
handler. this happens to be able to store one pathname buffer and
should be sufficient for calling any function in libc that doesn't
involve conversion between floating point and decimal representations.
x86 (both 32-bit and 64-bit variants) may also need a larger minimum
(around 2.5k) in the future to support avx-512, but the values on
these archs are left alone for now pending further analysis.
the value for PTHREAD_STACK_MIN is not increased to match MINSIGSTKSZ
at this time. this is so as not to preclude applications from using
extremely small thread stacks when they know they will not be handling
signals. unfortunately cancellation and multi-threaded set*id() use
signals as an implementation detail and therefore require a stack
large enough for a signal context, so applications which use extremely
small thread stacks may still need to avoid using these features.
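as a hedged sketch of the resulting relationship (values assumed from
the description above, not quoted from the patch), an arch whose saved
context needs about 4.5k might end up with:

    #define MINSIGSTKSZ 6144                  /* saved context plus a minimal handler */
    #define SIGSTKSZ (MINSIGSTKSZ + 6144)     /* keeps the historical 6k above the minimum */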
|
|
previously the implementation-internal signal used for multithreaded
set*id operations was left unblocked during handling of the
cancellation signal. however, on some archs, signal contexts are huge
(up to 5k) and the possibility of nested signal handlers drastically
increases the minimum stack requirement. since the cancellation signal
handler will do its job and return in bounded time before possibly
passing execution to application code, there is no need to allow other
signals to interrupt it.
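the approach amounts to installing the handler with a full signal
mask, roughly as follows (a sketch; the handler body and signal number
are placeholders, not musl internals):

    #include <signal.h>
    #include <string.h>

    static void cancel_handler(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig; (void)si; (void)ctx;   /* bounded work, then return */
    }

    static void install_cancel_handler(int signo)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = cancel_handler;
        sa.sa_flags = SA_SIGINFO | SA_RESTART;
        sigfillset(&sa.sa_mask);          /* block everything else while it runs */
        sigaction(signo, &sa, 0);
    }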
|
|
these additions were made based on scanning commit authors since the
last update, at the time of the 1.1.4 release.
|
|
overly long user/group names are potentially a DoS vector and source
of other problems like partial writes by sendmsg, and not useful.
|
|
previously, a sentinel value of (FILE *)-1 was used to inform the
caller of __nscd_query that nscd is not in use. aside from being an
ugly hack, this resulted in duplicate code paths for two logically
equivalent cases: no nscd, and "not found" result from nscd.
now, __nscd_query simply skips closing the socket and returns a valid
FILE pointer when nscd is not in use, and produces a fake "not found"
response header. the caller is then responsible for closing the socket
just like it would do if it had gotten a real "not found" response.
|
|
This completes the alternate backend support that was previously added
to the getpw* and getgr* functions. Unlike those, though, it
unconditionally queries nscd. Any groups from nscd that aren't in the
/etc/group file are added to the returned list, and any that are
present in the file are ignored. The purpose of this behavior is to
provide a view of the group database consistent with what is observed
by the getgr* functions. If group memberships reported by nscd were
honored when the corresponding group already has a definition in the
/etc/group file, the user's getgrouplist-based membership in the
group would conflict with their non-membership in the reported
gr_mem[] for the group.
The changes made also make getgrouplist thread-safe and eliminate its
clobbering of the global getgrent state.
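For reference, a usage example of the public interface whose behavior
is described above (the example itself is not from the patch):

    #include <grp.h>
    #include <stdio.h>

    int main(void)
    {
        gid_t groups[64];
        int n = sizeof groups / sizeof *groups;
        /* the result is the merged nscd + /etc/group view described above */
        if (getgrouplist("root", 0, groups, &n) < 0)
            return 1;
        for (int i = 0; i < n; i++)
            printf("%u\n", (unsigned)groups[i]);
        return 0;
    }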
|
|
|
|
The unwind code in libgcc uses this type for unwinding across signal
handlers. On aarch64 the kernel may place a sequence of structs on the
signal stack on top of the ucontext to provide additional information.
The unwinder only needs the header, but all the types the kernel
currently defines for this mechanism were added because they are part
of the uapi.
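Each record in that sequence begins with the generic header from the
kernel uapi, shown here for illustration:

    struct _aarch64_ctx {
        unsigned int magic;   /* identifies the record type */
        unsigned int size;    /* size of the record including this header */
    };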
|
|
previously, commit e7b9887e8b65253087ab0b209dc8dd85c9f09614 aligned
the sizes with the glibc ABI. subsequent discussion during the merge
of the aarch64 port reached a conclusion that we should reject larger
arch-specific sizes, which have significant cost and no benefit, and
stick with the existing common 32-bit sizes for all 32-bit/ILP32 archs
and the x86_64 sizes for 64-bit archs.
one peculiarity of this change is that x32 pthread_attr_t is now
larger in musl than in the glibc x32 ABI, making it unsafe to call
pthread_attr_init from x32 code that was compiled against glibc. with
all the ABI issues of x32, it's not clear that ABI compatibility will
ever work, but if it's needed, pthread_attr_init and related functions
could be modified not to write to the last slot of the object.
this is not a regression versus previous releases, since on previous
releases the x32 pthread type sizes were all severely oversized
already (due to incorrectly using the x86_64 LP64 definitions).
moreover, x32 is still considered experimental and not ABI-stable.
|
|
This adds complete aarch64 target support including bigendian subarch.
Some of the long double math functions are known to be broken;
otherwise the interfaces should be fully functional, but at this point
consider this port experimental.
Initial work on this port was done by Sireesh Tripurari and Kevin Bortis.
|
|
This is in preparation for the aarch64 port, done solely so that the
long double math symbols are available on ld128 platforms. The implementations
should be fixed up later once we have proper tests for these functions.
Added bigendian handling for ld128 bit manipulations too.
|
|
Changed the special case handling and bit manipulation to better
match the double version.
|
|
There are two main abi variants for thread local storage layout:
(1) TLS is above the thread pointer at a fixed offset and the pthread
struct is below that. So the end of the struct is at known offset.
(2) the thread pointer points to the pthread struct and TLS starts
below it. So the start of the struct is at known (zero) offset.
Assembly code for the dynamic TLSDESC callback needs to access the
dynamic thread vector (dtv) pointer which is currently at the front
of the pthread struct. So in case of (1) the asm code needs to hard
code the offset from the end of the struct which can easily break if
the struct changes.
This commit adds a copy of the dtv at the end of the struct. New members
must not be added after dtv_copy, only before it. The size of the struct
is increased a bit, but there is opportunity for size optimizations.
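A hedged sketch of the resulting layout (the member list is
illustrative; only dtv and dtv_copy are named above):

    struct pthread {
        struct pthread *self;
        void **dtv;          /* dynamic thread vector, near the front */
        /* ... other members ... */
        void **dtv_copy;     /* duplicate at the end, reachable from the
                                thread pointer at a small fixed offset in
                                the variant (1) layout */
    };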
|
|
due to a logic error in the use of masked cancellation mode,
pthread_cond_wait did not honor PTHREAD_CANCEL_DISABLE but instead
failed with ECANCELED when cancellation was pending.
|
|
Implemented as a wrapper around fegetround introducing a new function
to the ABI: __flt_rounds. (fegetround cannot be used directly from float.h)
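A plausible sketch of such a wrapper, assuming all four rounding-mode
macros are defined on the target (the FLT_ROUNDS encoding itself is
fixed by the C standard):

    #include <fenv.h>

    int __flt_rounds(void)
    {
        switch (fegetround()) {
        case FE_TOWARDZERO: return 0;
        case FE_TONEAREST:  return 1;
        case FE_UPWARD:     return 2;
        case FE_DOWNWARD:   return 3;
        }
        return -1;   /* indeterminable */
    }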
|
|
a conservative estimate of 4*sizeof(size_t) was used as the minimum
alignment for thread-local storage, despite the only requirements
being alignment suitable for struct pthread and void* (which struct
pthread already contains). additional alignment required by the
application or libraries is encoded in their headers and is already
applied.
over-alignment prevented the builtin_tls array from ever being used in
dynamic-linked programs on 64-bit archs, thereby requiring allocation
at startup even in programs with no TLS of their own.
|
|
|
|
|
|
new in linux v3.19 commit ee1b58d36aa1b5a79eaba11f5c3633c88231da83
used to report intel mpx bound violation information.
|
|
normally time.h would provide a definition for this struct, but
depending on the feature test macros in use, it may not be exposed,
leading to warnings when it's used in the function prototypes.
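the usual remedy is a forward declaration ahead of the prototypes,
roughly as below (a sketch; the affected header and the prototype
shown are not named by the commit):

    struct timespec;   /* incomplete type is sufficient for prototypes */
    int demo_timedwait(const struct timespec *timeout);   /* hypothetical prototype */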
|
|
|
|
these macros have the same distinct definition on blackfin, frv, m68k,
mips, sparc and xtensa kernels. POLLMSG and POLLRDHUP additionally
differ on sparc.
|
|
the previous definitions were copied from x86_64. not only did they
fail to match the ABI sizes; they also wrongly encoded an assumption
that long/pointer types are twice as large as int.
|
|
this re-check idiom seems to have been copied from the alloc_fwd and
alloc_rev functions, which guess a bin based on non-synchronized
memory access to adjacent chunk headers then need to confirm, after
locking the bin, that the chunk is actually in the bin they locked.
the check being removed, however, was being performed on a chunk
obtained from the already-locked bin. there is no race to account for
here; the check could only fail in the event of corrupt free lists,
and even then it would not catch them but simply continue running.
since the bin_index function is mildly expensive, it seems preferable
to remove the check rather than trying to convert it into a useful
consistency check. casual testing shows a 1-5% reduction in run time.
|
|
|
|
the malloc init code provided its own version of pthread_once type
logic, including the exact same bug that was fixed in pthread_once in
commit 0d0c2f40344640a2a6942dda156509593f51db5d.
since this code is called adjacent to expand_heap, which takes a lock,
there is no reason to have pthread_once-type initialization. simply
moving the init code into the interval where expand_heap already holds
its lock on the brk achieves the same result with much less
synchronization logic, and allows the buggy code to be eliminated
rather than just fixed.
|
|
the memory model we use internally for atomics permits plain loads of
values which may be subject to concurrent modification without
requiring that a special load function be used. since a compiler is
free to make transformations that alter the number of loads or the way
in which loads are performed, the compiler is theoretically free to
break this usage. the most obvious concern is with atomic cas
constructs: something of the form tmp=*p;a_cas(p,tmp,f(tmp)); could be
transformed to a_cas(p,*p,f(*p)); where the latter is intended to show
multiple loads of *p whose resulting values might fail to be equal;
this would break the atomicity of the whole operation. but even more
fundamental breakage is possible.
with the changes being made now, objects that may be modified by
atomics are modeled as volatile, and the atomic operations performed
on them by other threads are modeled as asynchronous stores by
hardware which happens to be acting on the request of another thread.
such modeling of course does not itself address memory synchronization
between cores/cpus, but that aspect was already handled. this all
seems less than ideal, but it's the best we can do without mandating a
C11 compiler and using the C11 model for atomics.
in the case of pthread_once_t, the ABI type of the underlying object
is not volatile-qualified. so we are assuming that accessing the
object through a volatile-qualified lvalue via casts yields volatile
access semantics. the language of the C standard is somewhat unclear
on this matter, but this is an assumption the linux kernel also makes,
and seems to be the correct interpretation of the standard.
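a minimal sketch of the access pattern from the last paragraph
(illustrative helper, not the musl atomic internals):

    /* the object's declared type is plain int, but each read goes through a
       volatile-qualified lvalue so the compiler emits exactly one load */
    static inline int load_watched(const int *p)
    {
        return *(volatile const int *)p;
    }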
|
|
like close, pthread_join is a resource-deallocation function which is
also a cancellation point. the intent of masked cancellation mode is
to exempt such functions from failure with ECANCELED.
|
|
pthread_testcancel is not in the ISO C reserved namespace and thus
cannot be used here. use the namespace-protected version of the
function instead.
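the usual pattern for such namespace protection looks roughly like
this (a sketch; the weak_alias macro shown approximates the one in
musl's internal headers):

    #define weak_alias(old, new) \
        extern __typeof(old) new __attribute__((__weak__, __alias__(#old)))

    void __pthread_testcancel(void)
    {
        /* ... actual cancellation check ... */
    }
    weak_alias(__pthread_testcancel, pthread_testcancel);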
|
|
|
|
previously, the __timedwait function was optionally a cancellation
point depending on whether it was passed a pointer to a cleanup
function and context to register. as of now, only one caller actually
used such a cleanup function (and it may face removal soon); most
callers either passed a null pointer to disable cancellation or a
dummy cleanup function.
now, __timedwait is never a cancellation point, and __timedwait_cp is
the cancellable version. this makes the intent of the calling code
more obvious and avoids ugly dummy functions and long argument lists.
|
|
as part of abstracting the futex wait, this function uses a whitelist
approach to suppress all futex error values which callers should not
see. when the masked cancellation mode was added, the new
ECANCELED error was not whitelisted. this omission caused the new
pthread_cond_wait code using masked cancellation to exhibit a spurious
wake (rather than acting on cancellation) when the request arrived
after blocking on the cond var.
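sketch of the whitelist idea (illustrative, not the literal source):
only errors callers are prepared to act on are returned, and ECANCELED
is the value the earlier change had omitted.

    #include <errno.h>

    static int filter_futex_error(int e)
    {
        switch (e) {
        case ETIMEDOUT: case EINTR: case EINVAL: case ECANCELED:
            return e;    /* meaningful to callers */
        default:
            return 0;    /* everything else is suppressed */
        }
    }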
|
|
on most cpu models, "rep stosq" has high overhead that makes it
undesirable for small memset sizes. the new code extends the
minimal-branch fast path for short memsets from size 15 up to size
126, and shrink-wraps this code path. in addition, "rep stosq" is
sensitive to misalignment. the cost varies with size and with cpu
model, but it has been observed performing 1.5 times slower when the
destination address is not aligned mod 16. the new code thus ensures
alignment mod 16, but also preserves any existing additional
alignment, in case there are cpu models where it is beneficial.
this version is based in part on changes proposed by Denys Vlasenko.
|
|
on most cpu models, "rep stosl" has high overhead that makes it
undesirable for small memset sizes. the new code extends the
minimal-branch fast path for short memsets from size 15 up to size 62,
and shrink-wraps this code path. in addition, "rep stosl" is very
sensitive to misalignment. the cost varies with size and with cpu
model, but it has been observed performing 1.5 to 4 times slower when
the destination address is not aligned mod 16. the new code thus
ensures alignment mod 16, but also preserves any existing additional
alignment, in case there are cpu models where it is beneficial.
this version is based in part on changes to the x86_64 memset asm
proposed by Denys Vlasenko.
|
|
Based on a patch by Szabolcs Nagy.
|