|
linux fails the open with ENOSPC, but POSIX mandates EAGAIN.
|
|
commit 2b4fd6f75b4fa66d28cddcf165ad48e8fda486d1 added time64 for this
function, but did so with a hidden assumption that the new time64
version of struct timex will be layout-compatible with the old one.
however, there is little benefit to doing it that way, and the cost is
permanent special-casing of 32-bit archs with 64-bit time_t in the
public interface definitions.
instead, do a full translation of the structure going in and out. this
commit is actually a revision to an earlier uncommitted version of the
code.
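a sketch of the resulting split: adjtimex stays a thin wrapper, and
all translation lives in clock_adjtime.

    #include <sys/timex.h>
    #include <time.h>

    int adjtimex(struct timex *tx)
    {
        /* time64 translation of struct timex happens in clock_adjtime */
        return clock_adjtime(CLOCK_REALTIME, tx);
    }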
|
|
presently the kernel does not actually define time64 versions of these
syscalls, and they're not really needed except to represent extreme
cpu time usage. however, x32's versions of the syscalls already behave
as time64 ones, meaning the functions were broken on x32 if the caller
used any part of the rusage result other than ru_utime and ru_stime.
commit 7e8171143124f7f510db555dc6f6327a965a3e84 made it possible to
fix this by treating x32's syscalls as time64 versions.
in the non-time64-syscall case, make the syscall with the rusage
destination pointer adjusted so that all members but the timevals line
up between the libc and kernel structures. on 64-bit archs, or present
32-bit archs with 32-bit time_t, the timevals will line up too and no
further work is needed. for future 32-bit archs with 64-bit time_t,
the timevals are copied into place, contingent on time_t being larger
than long.
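a sketch of the non-time64 path described above, written against
musl's internal __syscall/__syscall_ret (headers elided); details may
differ from the actual code.

    int getrusage(int who, struct rusage *ru)
    {
        /* aim the kernel just far enough before ru_maxrss that its
         * four timeval words land immediately before it, lining up
         * every member except the timevals */
        char *dest = (char *)&ru->ru_maxrss - 4*sizeof(long);
        int r = __syscall(SYS_getrusage, who, dest);
        if (!r && sizeof(time_t) > sizeof(long)) {
            /* future 32-bit archs with 64-bit time_t: widen the
             * kernel's long-format timevals into place */
            long kru[4];
            memcpy(kru, dest, 4*sizeof(long));
            ru->ru_utime = (struct timeval){ .tv_sec = kru[0], .tv_usec = kru[1] };
            ru->ru_stime = (struct timeval){ .tv_sec = kru[2], .tv_usec = kru[3] };
        }
        return __syscall_ret(r);
    }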
|
|
aside from the special value EOF, ungetc is specified to accept and
convert values outside the range of unsigned char. conversion takes
place automatically as part of assignment when storing into the
buffer, but the return value is also required to be the resulting
converted value, and this requirement was not satisfied.
simplified from patch by Wang Jianjian.
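the essence of the fix, sketched: f->rpos points to unsigned char, so
returning the value of the assignment yields the converted result both
in the buffer and to the caller.

    /* store through the unsigned char buffer pointer; the assignment
     * performs the conversion and its value is the converted byte */
    return *--f->rpos = c;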
|
|
based on patch by Dan Gohman, who caught this via compiler warnings.
analysis by Szabolcs Nagy determined that it's a bug, whereby errno
can be set incorrectly for values where the coercion from long double
to double causes rounding. it seems likely that floating point status
flags may be set incorrectly as a result too.
at the same time, clean up use of preprocessor concatenation involving
LDBL_MANT_DIG, which spuriously depends on it being a single unadorned
decimal integer literal, and instead use the equivalent formulation
2/LDBL_EPSILON. an equivalent change on the printf side was made in
commit bff6095d915f3e41206e47ea2a570ecb937ef926.
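for reference, the identity relied on here:

    2/LDBL_EPSILON == 2/2^(1-LDBL_MANT_DIG) == 2^LDBL_MANT_DIG

so the same threshold is obtained without pasting tokens onto
LDBL_MANT_DIG.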
|
|
SQRT.fmt exists on MIPS II+ (float), MIPS III+ (double).
ABS.fmt exists on MIPS I+, but only cores with the ABS2008 flag set
in FCSR implement the required behaviour.
|
|
Both sqrt and sqrtf shifted the signed exponent as signed int to adjust
the bit representation of the result. There are signed right shifts too
in the code but those are implementation defined and are expected to
compile to arithmetic shift on supported compilers and targets.
|
|
mbsrtowcs contains "vectorized" loops to quickly step over bytes
without the high bit set; these have undefined behavior by virtue of
aliasing uint32_t over top of char data for the accesses.
commit 4d0a82170a25464c39522d7190b9fe302045ddb2 fixed the
corresponding usage in string functions by using the may_alias
attribute conditional on __GNUC__ and disabled the vectorized code in
its absence. do the same for mbsrtowcs.
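a sketch of the pattern; the typedef and helper names are
illustrative, not the actual musl code.

    #include <stdint.h>
    #include <stddef.h>

    #ifdef __GNUC__
    typedef uint32_t __attribute__((__may_alias__)) w32;
    #endif

    /* length of the initial run of high-bit-clear (ASCII) bytes */
    static size_t ascii_run(const unsigned char *s, size_t n)
    {
        size_t i = 0;
    #ifdef __GNUC__
        /* align, then consume four bytes per iteration; the may_alias
         * typedef makes the word-sized loads well-defined */
        while (i < n && ((uintptr_t)(s+i) & 3)) {
            if (s[i] & 0x80) return i;
            i++;
        }
        while (i+4 <= n && !(*(const w32 *)(s+i) & 0x80808080))
            i += 4;
    #endif
        while (i < n && !(s[i] & 0x80)) i++;
        return i;
    }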
|
|
Some declarations of __tls_get_new were left in the code, even
though the definition got removed in commit
9d44b6460ab603487dab4d916342d9ba4467e6b9 ("install dynamic tls
synchronously at dlopen, streamline access").
this can make the build fail with
ld: lib/libc.so: hidden symbol `__tls_get_new' isn't defined
when libc.so is linked without --gc-sections, because a .hidden
declaration in asm code creates a reference even if the symbol
is not actually used.
|
|
lrint in (LONG_MAX, 1/DBL_EPSILON) and in (-1/DBL_EPSILON, LONG_MIN)
is not trivial: rounding to int may be inexact, but the conversion to
int may overflow and then the inexact flag must not be raised (the
overflow threshold is rounding mode dependent).
this matters on 32bit targets (without single instruction lrint or
rint), so the common case (when there is no overflow) is optimized by
inlining the lrint logic, otherwise the old code is kept as a fallback.
on my laptop an i486 lrint call is asm:10ns, old c:30ns, new c:21ns
on a smaller arm core: old c:71ns, new c:34ns
on a bigger arm core: old c:27ns, new c:19ns
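a sketch of the fast/slow split, assuming 32-bit long; the thresholds
and internals of the real code differ, and fenv access is assumed
enabled.

    #include <fenv.h>
    #include <math.h>
    #include <limits.h>

    long lrint_sketch(double x)
    {
        /* common case: small enough that (long)rint(x) cannot
         * overflow; this is the part worth inlining */
        if (fabs(x) < 2147483647.0)
            return (long)rint(x);

        /* slow path: if the conversion would overflow, an inexact
         * flag raised by rint must be cleared again */
        int e = fetestexcept(FE_INEXACT);
        x = rint(x);
        if (!e && (x > LONG_MAX || x < LONG_MIN))
            feclearexcept(FE_INEXACT);
        return (long)x; /* overflow raises invalid, result unspecified */
    }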
|
|
mips32 has two fpu register file variants: FR=0 with 32 32-bit
registers, where pairs of neighboring even/odd registers are used to
represent doubles, and FR=1 with 32 64-bit registers, each of which
can store a single or double.
up through r5 (our "mips" arch), the supported ABI uses FR=0, but
modern compilers generate "fpxx" model code that can safely operate
with either model. r6, which is an incompatible but similar ISA, drops
FR=0 and only provides the FR=1 model. as such, setjmp and longjmp,
which depended on being able to save and restore call-saved doubles by
storing and loading their 32-bit halves, were completely broken in the
presence of floating point code on mips r6.
to fix this, use the s.d and l.d mnemonics to store and load fpu
registers. these expand to the existing swc1 and lwc1 instructions for
pairs of 32-bit fpu registers on mips1, but on mips2 and later they
translate directly to the 64-bit sdc1 and ldc1.
with FR=0, sdc1 and ldc1 behave just like the pairs of swc1 and lwc1
instructions they replace, storing or loading the even/odd pair of fpu
registers that can be treated as separate single-precision floats or
as a unit representing a double. but with FR=1, they store/load
individual 64-bit registers. this yields the ABI-correct behavior on
mips r6, and should make linking of pre-r6 (plain "mips") code with
"fp64" model code workable, although this is and will likely remain
unsupported usage.
in addition to the mips r6 problem this change fixes, reportedly
clang's internal assembler refuses to assemble swc1 and lwc1
instructions for odd register indices when building for "fpxx" model
(the default). this caused setjmp and longjmp not to build. by using
the s.d and l.d forms, this problem is avoided too.
as a bonus, code size is reduced everywhere but mips1.
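for illustration, the save/restore of one call-saved double (register
and base address illustrative):

    # FR=0: swc1/swc1 pair on mips1, sdc1 on mips2+, covering $f24/$f25
    # FR=1: stores the whole 64-bit $f24
    s.d  $f24, 0($4)
    # matching restore in longjmp
    l.d  $f24, 0($4)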
|
|
armv8 removed the coprocessor instructions other than cp14, so
on an armv8 system the related hwcaps should never be set.
new llvm complains about the use of coprocessor instructions in
armv8-a mode (even though they are never executed at runtime),
so ifdef them out when musl is built for armv8.
|
|
in the timer thread start function, self->timer_id was accessed
without synchronization; the timer thread could fail to see the store
from the calling thread, resulting in timer_delete failing to delete
the correct kernel-level timer.
this fix is based on a patch by changdiankang, but with the load moved
to after receiving the timer_delete signal rather than just after the
start barrier, so as not to retain the possibility of data race with
timer_delete.
|
|
commit 8a544ee3a2a75af278145b09531177cab4939b41 introduced a
dependency of the failure path for explicit scheduling at thread
creation on __clone's handling of the start function returning, which
should result in SYS_exit.
as noted in commit 05870abeaac0588fb9115cfd11f96880a0af2108, the arm
version of __clone was broken in this case. in the past, the mips
version was also broken; it was fixed in commit
8b2b61e0001281be0dcd3dedc899bf187172fecb.
since this code path is pretty much entirely untested (previously only
reachable in applications that call the public clone() and return from
the start function) and consists of fragile per-arch asm, don't assume
it works, at least not until it's been thoroughly tested. instead make
the SYS_exit syscall from the start function's failure path.
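a sketch of the resulting failure path in the child's start wrapper
(the reporting of the error to the parent is elided):

    if (ret) {
        /* terminate this kernel thread directly rather than
         * returning into __clone's epilogue */
        __syscall(SYS_exit, 0);
    }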
|
|
commit cc3a4466605fe8dfc31f3b75779110ac93055bc1 fixed this for printf
but neglected to fix wprintf.
Previously, %lf caused a failure to output.
|
|
we don't actually support building asm source files as thumb1, but
it's possible that the condition __ARM_ARCH>=5 would be false on old
compilers that did not define __ARM_ARCH at all. avoiding that would
require enumerating all of the possible __ARM_ARCH_*__ macros for
testing.
as noted in commit 05870abeaac0588fb9115cfd11f96880a0af2108, mov lr,pc
is not valid for saving a return address when in thumb mode. since
this code is a hot path (dynamic TLS access), don't do the out-of-line
bl->bx chaining to save the return address; instead, use the fact that
this file is preprocessed asm to add the missing thumb bit with an add
in place of the mov.
the change here does not affect builds for ISA levels new enough to
have a thread pointer read instruction, or for armv5 and later as long
as the compiler properly defines __ARM_ARCH, or for any build as arm
(not thumb) code. it's likely that it makes no difference whatsoever
to any present-day practical build environments, but nonetheless now
it's safe.
as an alternative, we could just assume __thumb__ implies availability
of blx since we don't support building asm source files as thumb1. I
didn't do that in order to avoid having a wrong assumption here if
that ever changes.
|
|
as noted in commit 05870abeaac0588fb9115cfd11f96880a0af2108, mov lr,pc
is not a valid method for saving the return address in code that might
be built as thumb.
this one is unlikely to matter, since any ISA level that has thumb2
should also have native implementations of atomics that don't involve
kuser_helper, and the affected code is only used on very old kernels
to begin with.
|
|
mov lr,pc is not a valid way to save the return address in thumb mode
since it omits the thumb bit. use a chain of bl and bx to emulate blx.
this could be avoided by converting to a .S file with preprocessor
conditions to use blx if available, but the time cost here is
dominated by the syscall anyway.
while making this change, also remove the remnants of support for
pre-bx ISA levels. commit 9f290a49bf9ee247d540d3c83875288a7991699c
removed the hack from the parent code paths, but left the unnecessary
code in the child. keeping it would require rewriting two code paths
rather than one, and is useless for reasons described in that commit.
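the shape of the emulation, for illustration (register choice
illustrative):

    	@ emulate "blx ip" on pre-blx ISA levels:
    	bl 1f           @ sets lr to the next instruction, with the
    	                @ thumb bit correct for the current mode
    	@ (callee returns here; continuation elided)
    1:	bx ip           @ jump to the target address held in ip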
|
|
previously, when pthread_create failed due to inability to set
explicit scheduling according to the requested attributes, the nascent
thread was detached and made responsible for its own cleanup via the
standard pthread_exit code path. this left it consuming resources
potentially well after pthread_create returned, in a way that the
application could not see or mitigate, and unnecessarily exposed its
existence to the rest of the implementation via the global thread
list.
instead, attempt explicit scheduling early and reuse the failure path
for __clone failure if it fails. the nascent thread's exit futex is
not needed for unlocking the thread list, since the thread calling
pthread_create holds the thread list lock the whole time, so it can be
repurposed to ensure the thread has finished exiting. no pthread_exit
is needed, and freeing the stack, if needed, can happen just as it
would if __clone failed.
|
|
if setting scheduling properties succeeds, the new thread may end up
with lower priority than the caller, and may be unable to continue
running due to another intermediate-priority thread. this produces a
priority inversion situation for the thread calling pthread_create,
since it cannot return until the new thread reports success.
originally, the parent was responsible for setting the new thread's
priority; commits b8742f32602add243ee2ce74d804015463726899 and
40bae2d32fd6f3ffea437fa745ad38a1fe77b27e changed it as part of
trimming down the pthread structure. since then, commit
04335d9260c076cf4d9264bd93dd3b06c237a639 partly reversed the changes,
but did not switch responsibilities back. do that now.
|
|
commit 8f11e6127fe93093f81a52b15bb1537edc3fc8af wrongly documented
that all changes to libc.threads_minus_1 were guarded by the thread
list lock, but the decrement for failed SYS_clone took place after the
thread list lock was released.
|
|
commit 030e52639248ac8417a4934298caa78c21a228d1 added optreset, a BSD
extension to getopt duplicating the functionality (also an extension)
of setting optind to 0, but failed to provide a public declaration for
it. according to the BSD documentation and headers, the application is
not supposed to need to provide its own declaration.
|
|
these are presently extensions, thus named with _np to match glibc and
other implementations that provide them; however they are likely to be
standardized in the future without the _np suffix as a result of
Austin Group issue 1208. if so, both names will be kept as aliases.
|
|
commit 9b14ad541068d4f7d0be9bcd1ff4c70090d868d3 introduced this
namespace violation.
|
|
commit 7590203c486d9002522019045d34ee3dee0a66f5 omitted static here.
|
|
R_PPC_UADDR32 (R_PPC64_UADDR64) has the same meaning as R_PPC_ADDR32
(R_PPC64_ADDR64), except that its address need not be aligned. For
powerpc64, BFD ld(1) will automatically convert between ADDR<->UADDR
relocations when the address is/isn't at its native alignment. This
will happen if, for example, there is a pointer in a packed struct.
gold and lld do not currently generate R_PPC64_UADDR64, but pass
through misaligned R_PPC64_ADDR64 relocations from object files,
possibly relaxing them to misaligned R_PPC64_RELATIVE. In both cases
(relaxed or not) this violates the PSABI, which defines the relevant
field type as "a 64-bit field occupying 8 bytes, the alignment of
which is 8 bytes unless otherwise specified."
All three linkers violate the PSABI on 32-bit powerpc, where the only
difference is that the field is 32 bits wide, aligned to 4 bytes.
Currently musl fails to load executables linked by BFD ld containing
R_PPC64_UADDR64, with the error "unsupported relocation type 43".
This change provides compatibility with BFD ld on powerpc64, and any
static linker on either architecture that starts following the PSABI
more closely.
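sketched, the new relocation case is just an unaligned store of the
same value (macro and variable names per this sketch, not necessarily
the actual code):

    case REL_UADDR:     /* R_PPC_UADDR32 / R_PPC64_UADDR64 */
        /* same value as the aligned ADDR case, written bytewise
         * because reloc_addr may be misaligned */
        val = sym_val + addend;
        memcpy(reloc_addr, &val, sizeof val);
        break;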
|
|
This function is a GNU extension introduced in glibc 2.17.
|
|
POSIX allows a null pointer, in which case the function only checks
the validity of the clock id argument.
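a sketch of the guarded conversion on the 32-bit path (musl-internal
__syscall/__syscall_ret assumed):

    int clock_getres(clockid_t clk, struct timespec *ts)
    {
        long r32[2];
        /* the kernel validates clk even with a null result pointer */
        int r = __syscall(SYS_clock_getres, clk, ts ? r32 : 0);
        if (!r && ts) {
            ts->tv_sec  = r32[0];
            ts->tv_nsec = r32[1];
        }
        return __syscall_ret(r);
    }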
|
|
at the point of this check, the pointer has already been dereferenced.
clock_settime is not defined for null pointer arguments.
|
|
somewhat analogous to commit d0b547dfb5f7678cab6bc39dd736ed6454357ca4,
but here the omission of the null timeout check was in the time64
syscall code path. this code is not yet used except on x32.
|
|
these accept the netbsd/openbsd message catalog file format,
consisting of a sorted list of set headers and a sorted list of
message headers for each set, admitting trivial binary search for
lookups.
the gnu format was not chosen because it's unusably bad. it does not
admit efficient (log time or better) lookups; rather, it requires
linear search or hash table lookups, and the hash function is awful:
it's literally set_id*msg_id.
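a generic sketch of the lookup the format admits; the on-disk record
layouts are elided and all names here are illustrative. the same
routine serves both the set table and a set's message table.

    #include <stddef.h>
    #include <stdint.h>

    static uint32_t be32(const unsigned char *p)
    {
        return (uint32_t)p[0]<<24 | p[1]<<16 | p[2]<<8 | p[3];
    }

    /* binary search a sorted table of fixed-size records whose first
     * field is a big-endian 32-bit id */
    static const unsigned char *find_id(const unsigned char *tab,
                                        size_t n, size_t stride, uint32_t id)
    {
        while (n) {
            size_t k = n/2;
            const unsigned char *p = tab + k*stride;
            uint32_t v = be32(p);
            if (v == id) return p;
            if (v < id) { tab = p + stride; n -= k+1; }
            else n = k;
        }
        return 0;
    }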
|
|
commit 722a1ae3351a03ab25010dbebd492eced664853b inadvertently passed a
copy of {s,us} to the syscall even if the timeout argument tv was
null, thereby causing immediate timeout (polling) in place of
unlimited timeout. only archs using SYS_select were affected.
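the shape of the fix (cancellation-point wrapper and validation
elided):

    time_t s = tv ? tv->tv_sec : 0;
    suseconds_t us = tv ? tv->tv_usec : 0;
    /* forward a null timeout as a null pointer, not as {s,us} */
    return syscall(SYS_select, n, rfds, wfds, efds,
                   tv ? ((long[]){s, us}) : 0);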
|
|
when the pattern ended with one or more literal path components, or
when the GLOB_MARK flag was passed to request that glob flag directory
results and the type obtained by readdir was unknown or inconclusive
(symlink), the stat function was called to evaluate existence and/or
determine type. however, stat fails with ENOENT for broken symlinks,
and this caused the match to be omitted from the results.
instead, use stat only for the unknown/inconclusive cases with
GLOB_MARK, and otherwise, or if stat fails, use lstat when existence
still needs to be determined. this minimizes the number of costly syscalls,
performing both only in the case where GLOB_MARK is in use and there
is a final literal path component which is a broken symlink.
based on/simplified from patch by James Y Knight.
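roughly, the decision now looks like this; type_unknown stands in for
the inconclusive-readdir check and is illustrative:

    struct stat st;
    int exists = 0, isdir = 0;
    if ((flags & GLOB_MARK) && type_unknown && !stat(path, &st)) {
        exists = 1;
        isdir = S_ISDIR(st.st_mode);
    } else if (!lstat(path, &st)) {
        exists = 1;   /* a broken symlink still exists; never marked */
    }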
|
|
The only reason we needed to preserve the link register was because we
were using a branch-link instruction to branch to __cp_cancel.
Replacing this with a branch means we can avoid the save/restore as
the link register is no longer modified.
|
|
otherwise alarm will break on 32-bit archs when time_t is changed to
64-bit. a second itimerval object is introduced for retrieving the old
value, since the setitimer function has restrict-qualified arguments.
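a sketch of the adjusted function:

    #include <sys/time.h>

    unsigned alarm(unsigned seconds)
    {
        struct itimerval it = { .it_value.tv_sec = seconds };
        struct itimerval old; /* distinct object: the arguments to
                                 setitimer are restrict-qualified */
        setitimer(ITIMER_REAL, &it, &old);
        /* round a partial second up so 0 is never returned early */
        return old.it_value.tv_sec + !!old.it_value.tv_usec;
    }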
|
|
commit f3ed8bfe8a82af1870ddc8696ed4cc1d5aa6b441 inadvertently removed
labels that were still needed.
|
|
commit 31c5fb80b9eae86f801be4f46025bc6532a554c5 introduced underflow
code paths for the i386 math asm, along with checks on the fpu status
word to skip the underflow-generation instructions if the underflow
flag was already raised. unfortunately, at least one such path, in
log1p, returned with 2 items on the x87 stack rather than just 1 item
for the return value. this is a violation of the ABI's calling
convention, and could cause subsequent floating point code to produce
NANs due to x87 stack overflow. if floating point results are used in
flow control, this can lead to runaway wrong code execution.
rather than reviewing each "underflow already raised" code path for
correctness, remove them all. they're likely slower than just
performing the underflow code unconditionally, and significantly more
complex.
all of this code should be ripped out and replaced by C source files
with inline asm. doing so would preclude this kind of error by having
the compiler perform all x87 stack register allocation and stack
manipulation, and would produce comparable or better code. however
such a change is a much larger project.
|
|
commit 72f50245d018af0c31b38dec83c557a4e5dd1ea8 broke this by creating
a code path where r is uninitialized.
|
|
this fixes a major upcoming performance regression introduced by
commit 72f50245d018af0c31b38dec83c557a4e5dd1ea8, whereby 32-bit archs
would lose vdso clock_gettime after switching to 64-bit time_t, unless
the kernel supports time64 and provides a time64 version of the vdso
function. this would incur not just one but two syscalls: first, the
failed time64 syscall, then the fallback time32 one.
overflow of the 32-bit result is detected and triggers a revert to
syscalls. normally, on a system that's not Y2038-ready, this would
still overflow, but if the process has been migrated to a
time64-capable kernel or if the kernel has been hot-patched to add
time64 syscalls, it may conceivably work.
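the revert mechanism, sketched with illustrative names; vdso_cgt32
stands for the 32-bit vdso function, and returning -ENOSYS tells the
caller to drop the vdso pointer and fall back to syscalls:

    static int cgt_time32_wrap(clockid_t clk, struct timespec *ts)
    {
        long ts32[2];
        int r = vdso_cgt32(clk, ts32);
        if (!r && ts32[0] < 0)
            return -ENOSYS;     /* result overflowed 32-bit time_t */
        if (!r) {
            ts->tv_sec  = ts32[0];
            ts->tv_nsec = ts32[1];
        }
        return r;
    }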
|
|
per policy, define the feature test macro to get declarations for the
pthread_tryjoin_np and pthread_timedjoin_np functions. in the past
this has been only for checking; with 32-bit archs getting 64-bit
time_t it will also be necessary for symbols to get redirected
correctly.
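concretely, per that policy, the implementation file begins with:

    #define _GNU_SOURCE
    #include <pthread.h>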
|
|
the time64 syscall has to be used if time_t is 64-bit, since there's
no way of knowing before making a syscall whether the result will fit
in 32 bits, and the 32-bit syscalls do not report overflow as an
error.
on 64-bit archs, there is no change to the code after preprocessing.
on current 32-bit archs, the result is now read from the kernel
through long[2] array, then copied into the timespec, to remove the
assumption that time_t is the same as long.
vdso clock_gettime is still used in place of a syscall if available.
32-bit archs with 64-bit time_t must use the time64 version of the
vdso function; if it's not available, performance will significantly
suffer. support for both vdso functions could be added, but would
break the ability to move a long-lived process from a pre-time64
kernel to one that can outlast Y2038 with checkpoint/resume, at least
without added hacks to identify that the 32-bit function is no longer
usable and stop using it (e.g. by seeing negative tv_sec). this
possibility may be explored in future work on the function.
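ignoring the vdso layer, a sketch of the syscall-level logic for a
32-bit arch (musl-internal macros assumed; details differ):

    int r;
    #ifdef SYS_clock_gettime64
    /* must try time64 first when time_t is 64-bit: a 32-bit result
     * cannot report overflow */
    r = __syscall(SYS_clock_gettime64, clk, ts);
    if (SYS_clock_gettime64 == SYS_clock_gettime || r != -ENOSYS)
        return __syscall_ret(r);
    #endif
    long ts32[2];   /* don't assume time_t is long */
    r = __syscall(SYS_clock_gettime, clk, ts32);
    if (!r) {
        ts->tv_sec  = ts32[0];
        ts->tv_nsec = ts32[1];
    }
    return __syscall_ret(r);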
|
|
the 64-bit/time64 version of the syscall is not API-compatible with
the userspace timex structure definition; fields specified as long
have type long long. so when using the time64 syscall, we have to
convert the entire structure. this was always the case for x32 as
well, but went unnoticed, meaning that clock_adjtime just passed junk
to the kernel on x32. it should be fixed now.
for the fallback case, we avoid encoding any assumptions about the new
location of the time member or naming of the legacy slots by accessing
them through a union of the kernel type and the new userspace type.
the only assumption is that the non-time members live at the same
offsets as in the (non-time64, long-based) kernel timex struct. this
property saves us from having to convert the whole thing, and avoids a
lot of additional work in compat shims.
the new code is statically unreachable for now except on x32, where it
fixes major brokenness. it is permanently unreachable on 64-bit.
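a sketch of the fallback-side conversion; struct ktimex is an
illustrative stand-in for the kernel's long-based timex:

    union {
        struct ktimex k;
        struct timex u;
    } tmp = { .u = *utx };
    /* non-time members share offsets; only the time member needs
     * explicit placement in the kernel's legacy slots */
    tmp.k.time_sec  = utx->time.tv_sec;
    tmp.k.time_usec = utx->time.tv_usec;
    r = __syscall(SYS_clock_adjtime, clock_id, &tmp.k);
    if (r >= 0) {
        *utx = tmp.u;
        utx->time.tv_sec  = tmp.k.time_sec;
        utx->time.tv_usec = tmp.k.time_usec;
    }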
|
|
without this, the SIOCGSTAMP and SIOCGSTAMPNS ioctl commands, for
obtaining timestamps, would stop working on pre-5.1 kernels after
time_t is switched to 64-bit and their values are changed to the new
time64 versions.
new code is written such that it's statically unreachable on 64-bit
archs, and on existing 32-bit archs until the macro values are changed
to activate 64-bit time_t.
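a sketch of the fallback, using the kernel's _OLD name for the
pre-5.1 command value (structure of the real code differs):

    int r = __syscall(SYS_ioctl, fd, SIOCGSTAMP, arg);
    if (r == -ENOTTY) {
        /* old kernel: issue the legacy command and widen the result */
        long tv32[2];
        r = __syscall(SYS_ioctl, fd, SIOCGSTAMP_OLD, tv32);
        if (!r) {
            struct timeval *tv = arg;
            tv->tv_sec  = tv32[0];
            tv->tv_usec = tv32[1];
        }
    }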
|
|
without this, the SO_RCVTIMEO and SO_SNDTIMEO socket options would
stop working on pre-5.1 kernels after time_t is switched to 64-bit and
their values are changed to the new time64 versions.
new code is written such that it's statically unreachable on 64-bit
archs, and on existing 32-bit archs until the macro values are changed
to activate 64-bit time_t.
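an analogous sketch for setsockopt (kernel _OLD names; error handling
illustrative):

    if (r == -ENOPROTOOPT &&
        (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO)) {
        /* old kernel: narrow the timeval to the legacy long layout */
        const struct timeval *tv = optval;
        long tv32[2] = { tv->tv_sec, tv->tv_usec };
        int old = optname == SO_RCVTIMEO ? SO_RCVTIMEO_OLD
                                         : SO_SNDTIMEO_OLD;
        r = __syscall(SYS_setsockopt, fd, level, old, tv32, sizeof tv32);
    }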
|
|
the __socketcall and __socketcall_cp macros are remnants from a really
old version of the syscall-mechanism infrastructure, and don't follow
the pattern that the "__" version of the macro returns the raw negated
error number rather than setting errno and returning -1.
for time64 purposes, some socket syscalls will need to operate on the
error value rather than returning immediately, so fix this up so they
can use it.
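after the cleanup, the socketcall-arch form of the macro looks roughly
like:

    #define __socketcall(nm, a, b, c, d, e, f) \
        __syscall(SYS_socketcall, __SC_##nm, \
            ((long [6]){ (long)(a), (long)(b), (long)(c), \
                         (long)(d), (long)(e), (long)(f) }))

so a wrapper can do r = __socketcall(...), inspect r for a raw negated
error such as -ENOPROTOOPT, and only then return __syscall_ret(r).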
|
|
being "ctl" functions that take command numbers, these will be handled
like ioctl/sockopt/etc., using new command numbers for the time64
variants with an "IPC_TIME64" bit added to their values. to obtain
such a reserved bit, we reuse the IPC_64 bit, 0x100, which served only
as part of the libc-to-kernel interface, not as a public interface of
the libc functions.
using new command numbers avoids the need for compat shims (in ABIs
doing time64 through symbol redirection and compat shims) and, by
virtue of having a fixed time64 bit for all commands, we can ensure
that libc can perform the appropriate translations, even if the
application is using new commands from a newer version of the libc
headers than the libc available at runtime.
for the vast majority of 32-bit archs, the kernel {sem,shm,msq}id64_ds
definitions left padding space intended for expanding their time_t
fields to 64 bits in-place, and it would have been really nice to be
able to do time64 support that way. however the padding was almost
always in little-endian order (except on powerpc, and for msqid_ds
only on mips, where it matched the arch's byte order), and more
importantly, the alignment was overlooked. in semid_ds and msqid_ds,
the time_t members were not suitably aligned to be expanded to 64-bit,
due to the ipc_perm header consisting of 9 32-bit words -- except on
powerpc where ipc_perm contains an extra padding word. in shmid_ds,
the time_t members were suitably aligned, except that mips
(accidentally?) omitted the padding for them altogether.
as a result, we're stuck with adding new time_t fields on the end of
the structures, and assembling the 32-bit lo/hi parts (or 16-bit hi
parts, for mips shmid_ds, which lacked sufficient reserved space for
full 32-bit hi parts) to fill them in.
all of the functional changes here are conditional on the IPC_TIME64
macro having a nonzero definition, which will only happen when
IPC_STAT is redefined for 32-bit archs, and on time_t being larger
than long, so for now the new code is all dead code.
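the read-side assembly of the 64-bit values, sketched with
illustrative member names:

    #if IPC_TIME64
    if (r >= 0 && (cmd & IPC_TIME64)) {
        /* new time_t fields at the end of the struct, filled from the
         * lo/hi parts stored in the old slots and reserved space */
        buf->sem_otime = buf->__sem_otime_lo
            | (time_t)buf->__sem_otime_hi << 32;
        buf->sem_ctime = buf->__sem_ctime_lo
            | (time_t)buf->__sem_ctime_hi << 32;
    }
    #endif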
|
|
due to the variadic signature, semctl needs to be made aware of any
new commands that take arguments. this was overlooked when commit
af55070eae5438476f921d827b7ae49e8141c3fe added SEM_STAT_ANY.
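the relevant shape of semctl's argument fetch, with the missing
command added (stdarg boilerplate elided):

    switch (cmd & ~IPC_TIME64) {
    case SETVAL: case GETALL: case SETALL: case IPC_SET:
    case IPC_STAT & ~IPC_TIME64:
    case SEM_STAT & ~IPC_TIME64:
    case SEM_STAT_ANY & ~IPC_TIME64:    /* previously missing */
        va_start(ap, cmd);
        arg = va_arg(ap, union semun);
        va_end(ap);
    }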
|