Age | Commit message | Author | Files | Lines |
|
since CLOCKS_PER_SEC is 1000000 (required by XSI) and the times
syscall reports values in 1/100 second units (Linux), the correct
scaling factor is 10000, not 100. note that only ancient kernels which
lack clock_gettime are affected.
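as a standalone illustration of the conversion only (not the actual source; it assumes the 100 Hz tick unit described above):

    #include <time.h>

    /* whatever tick count the times()-based fallback produces is in
     * 1/100 second units, while clock_t results must be in
     * CLOCKS_PER_SEC (1000000) units, hence the factor of 10000 */
    static clock_t ticks_to_clock_t(clock_t ticks_100hz)
    {
        return ticks_100hz * (CLOCKS_PER_SEC/100);   /* *10000, not *100 */
    }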
|
|
all return values are valid, and on 32-bit systems, values that look
like errors can and will occur. since the only actual error this
function could return is EFAULT, and it is only returnable when the
application has invoked undefined behavior, simply ignore the
possibility that the return value is actually an error code.
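the description matches times(); under that assumption, the fix amounts to returning the raw syscall result without routing it through the usual error translation, roughly:

    #include <sys/times.h>
    #include "syscall.h"   /* internal header assumed for __syscall/SYS_times */

    /* every possible return value is a legitimate tick count, so no
     * negative-range check and no errno handling */
    clock_t times(struct tms *tms)
    {
        return __syscall(SYS_times, tms);
    }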
|
|
|
|
|
|
the issue at hand is that many syscalls require as an argument the
kernel-ABI size of sigset_t, intended to allow the kernel to switch to
a larger sigset_t in the future. previously, each arch was defining
this size in syscall_arch.h, which was redundant with the definition
of _NSIG in bits/signal.h. as it's used in some not-quite-portable
application code as well, _NSIG is much more likely to be recognized
and understood immediately by someone reading the code, and it's also
shorter and less cluttered.
note that _NSIG is actually 65/129, not 64/128, but the division takes
care of throwing away the off-by-one part.
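an illustration of the pattern using the raw syscall interface (assuming _NSIG is visible from signal.h; this is not the library's internal code):

    #include <signal.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* block SIGUSR1 via the raw syscall; the last argument is the
     * kernel-ABI sigset size, derived from _NSIG (65/8 == 8 bytes on
     * most archs) instead of a per-arch constant from syscall_arch.h */
    int block_usr1(void)
    {
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        return syscall(SYS_rt_sigprocmask, SIG_BLOCK, &set, (void *)0, _NSIG/8);
    }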
|
|
this way they'll go into .rodata, decreasing memory pressure.
|
|
report/patch by Hiltjo Posthuma <hiltjo@codemadness.org>
|
|
this mirrors the stdio_impl.h cleanup. one header which is not
strictly needed, errno.h, is left in pthread_impl.h, because since
pthread functions return their error codes rather than using errno,
nearly every single pthread function needs the errno constants.
in a few places, rather than bringing in string.h to use memset, the
memset was replaced by direct assignment. this seems to generate much
better code anyway, and makes many functions which were previously
non-leaf functions into leaf functions (possibly eliminating a great
deal of bloat on some platforms where non-leaf functions require ugly
prologue and/or epilogue).
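a made-up example of the kind of change (the struct and names are hypothetical):

    #include <string.h>

    struct waiter { int state; void *next; };

    /* before: pulls in string.h and emits a call, making the caller non-leaf */
    static void reset_old(struct waiter *w)
    {
        memset(w, 0, sizeof *w);
    }

    /* after: plain assignments, no call, so the function can stay a leaf */
    static void reset_new(struct waiter *w)
    {
        w->state = 0;
        w->next = 0;
    }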
|
|
|
|
to deal with the fact that the public headers may be used with pre-c99
compilers, __restrict is used in place of restrict, and defined
appropriately for any supported compiler. we also avoid the form
[restrict] since older versions of gcc rejected it due to a bug in the
original c99 standard, and instead use the form *restrict.
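a sketch of how that can be arranged; the actual header logic may differ in detail:

    /* map __restrict onto whatever the compiler supports */
    #if __STDC_VERSION__ >= 199901L
    #define __restrict restrict    /* real c99 keyword */
    #elif !defined(__GNUC__)
    #define __restrict             /* pre-c99, non-gnu: define it away */
    #endif                         /* gcc-family compilers accept __restrict natively */

    /* prototypes then use the *__restrict form rather than [restrict] */
    char *strcpy(char *__restrict dest, const char *__restrict src);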
|
|
some minor changes to how hard-coded sets for thread-related purposes
are handled were also needed, since the old object sizes were not
necessarily sufficient. things have gotten a bit ugly in this area,
and i think a cleanup is in order at some point, but for now the goal
is just to get the code working on all supported archs including mips,
which was badly broken by linux rejecting syscalls with the wrong
sigset_t size.
|
|
|
|
the old code could be kept for cases where SYS_utime is available, but
it's not really worth the ifdef ugliness. and better to avoid
deprecated stuff just in case the kernel devs ever get crazy enough to
start removing it from archs where it was part of the ABI and breaking
static bins...
|
|
i did some testing trying to switch malloc to use the new internal
lock with priority inheritance, and my malloc contention test got
20-100 times slower. if priority inheritance futexes are this slow,
it's simply too high a price to pay for avoiding priority inversion.
maybe we can consider them somewhere down the road once the kernel
folks get their act together on this (and preferably don't link it to
glibc's inefficient lock API)...
as such, i've switched __lock to use malloc's implementation of
lightweight locks, and updated all the users of the code to use an
array with a waiter count for their locks. this should give optimal
performance in the vast majority of cases, and it's simple.
malloc is still using its own internal copy of the lock code because
it seems to yield measurably better performance with -O3 when it's
inlined (20% or more difference in the contention stress test).
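in rough outline the lock looks like this, in terms of the internal atomic and futex-wait helpers (details differ from the real code):

    /* assumed internal primitives: atomic swap/store and futex wait/wake */
    int a_swap(volatile int *, int);
    void a_store(volatile int *, int);
    void __wait(volatile int *, volatile int *, int, int);
    void __wake(volatile void *, int, int);

    /* l[0] is the lock word, l[1] counts waiters */
    void lock_sketch(volatile int l[2])
    {
        while (a_swap(l, 1))
            __wait(l, l+1, 1, 1);   /* sleep while l[0]==1, bumping l[1] */
    }

    void unlock_sketch(volatile int l[2])
    {
        a_store(l, 0);
        if (l[1]) __wake(l, 1, 1);  /* wake one waiter only if someone waits */
    }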
|
|
|
|
|
|
|
|
|
|
the changes to syscall_ret are mostly no-ops in the generated code,
just cleanup of type issues and removal of some implementation-defined
behavior. the one exception is the change in the comparison value,
which is fixed so that 0xf...f000 (which in principle could be a valid
return value for mmap, although probably never in reality) is not
treated as an error return.
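the resulting check looks roughly like this:

    #include <errno.h>

    /* only -1..-4095 (i.e. values above -4096UL) are error returns, so
     * 0xf...f000 itself now passes through as a valid result */
    long __syscall_ret(unsigned long r)
    {
        if (r > -4096UL) {
            errno = -r;
            return -1;
        }
        return r;
    }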
|
|
|
|
|
|
|
|
it's missing at least:
- derived fields
- week numbers
- short year (without century) support
- locale modifiers
|
|
|
|
it previously was returning the pseudo-monotonic-realtime clock
returned by times() rather than process cputime. it also violated C
namespace by pulling in times().
we now use clock_gettime() if available because times() has
ridiculously bad resolution. still provide a fallback for ancient
kernels without clock_gettime.
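in outline, the new implementation does something like this (fallback and overflow handling omitted):

    #include <time.h>

    clock_t clock(void)
    {
        struct timespec ts;
        /* process cpu time, not wall-clock time */
        if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts))
            return -1;  /* ancient kernels: fall back to the times() path */
        return ts.tv_sec*CLOCKS_PER_SEC + ts.tv_nsec/(1000000000/CLOCKS_PER_SEC);
    }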
|
|
due to the barrier, it's safe just to block signals in the new thread,
rather than blocking and unblocking in the parent thread.
|
|
|
|
if a timer thread leaves signals unblocked, any future attempt by the
main thread to prevent the process from being terminated by blocking
signals will fail, since the signal can still be delivered to the
timer thread.
|
|
this works around pcc's lack of working support for weak references,
and in principle is nice because it gets us back to the stage where
the only weak symbol feature we use is weak aliases, nothing else.
having fewer dependencies on fancy linker features is a good thing.
|
|
|
|
|
|
these changes also make it so clock_gettime(CLOCK_REALTIME, &ts) works
even on pre-2.6 kernels, emulated via the gettimeofday syscall. there
is no cost for the fallback check, as it falls under the error case
that already must be checked for storing the error code in errno, but
which would normally be hidden inside __syscall_ret.
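a sketch of the fallback path, using the internal syscall macros and ignoring vdso and other clocks:

    #include <time.h>
    #include <errno.h>
    #include "syscall.h"   /* internal header assumed for __syscall/SYS_* */

    int clock_gettime(clockid_t clk, struct timespec *ts)
    {
        long r = __syscall(SYS_clock_gettime, clk, ts);
        if (!r) return 0;
        if (r == -ENOSYS && clk == CLOCK_REALTIME) {
            /* pre-2.6 kernel: gettimeofday fills tv_sec/tv_usec, and the
             * microseconds just need scaling to nanoseconds in place */
            __syscall(SYS_gettimeofday, ts, 0);
            ts->tv_nsec = (int)ts->tv_nsec * 1000;
            return 0;
        }
        errno = -r;
        return -1;
    }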
|
|
|
|
|
|
|
|
the new approach relies on the fact that the only ways to create
sigset_t objects without invoking UB are to use the sig*set()
functions, or from the masks returned by sigprocmask, sigaction, etc.
or in the ucontext_t argument to a signal handler. thus, as long as
sigfillset and sigaddset avoid adding the "protected" signals, there
is no way the application will ever obtain a sigset_t including these
bits, and thus no need to add the overhead of checking/clearing them
when sigprocmask or sigaction is called.
note that the old code actually *failed* to remove the bits from
sa_mask when sigaction was called.
the new implementations are also significantly smaller, simpler, and
faster due to ignoring the useless "GNU HURD signals" 65-1024, which
are not used and, if there's any sanity in the world, never will be
used.
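for illustration, sigaddset can enforce this along the following lines (the exact reserved range and sigset layout here are assumptions):

    #include <signal.h>
    #include <errno.h>

    int sigaddset(sigset_t *set, int sig)
    {
        unsigned s = sig-1;
        /* reject out-of-range signals and the implementation-reserved
         * ones (assumed 32..34 here), so no valid set ever contains them */
        if (s >= 64 || (unsigned)sig-32 < 3) {
            errno = EINVAL;
            return -1;
        }
        /* assumes the set is laid out as an array of unsigned longs */
        ((unsigned long *)set)[s/(8*sizeof(long))] |= 1UL << (s%(8*sizeof(long)));
        return 0;
    }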
|
|
this patch improves the correctness, simplicity, and size of
cancellation-related code. modulo any small errors, it should now be
completely conformant, safe, and resource-leak free.
the notion of entering and exiting cancellation-point context has been
completely eliminated and replaced with alternative syscall assembly
code for cancellable syscalls. the assembly is responsible for setting
up execution context information (stack pointer and address of the
syscall instruction) which the cancellation signal handler can use to
determine whether the interrupted code was in a cancellable state.
these changes eliminate race conditions in the previous generation of
cancellation handling code (whereby a cancellation request received
just prior to the syscall would not be processed, leaving the syscall
to block, potentially indefinitely), and remedy an issue where
non-cancellable syscalls made from signal handlers became cancellable
if the signal handler interrupted a cancellation point.
x86_64 asm is untested and may need a second try to get it right.
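conceptually, the handler-side check looks something like this; the names are illustrative and the pc extraction is arch-specific:

    #include <signal.h>
    #include <stdint.h>
    #include <ucontext.h>

    /* assumed markers emitted by the cancellable-syscall asm, bracketing
     * the syscall instruction */
    extern const char __cp_begin[], __cp_end[];

    /* stand-in for the per-thread cancellation-requested flag */
    static volatile sig_atomic_t cancel_requested;

    static void cancel_handler(int sig, siginfo_t *si, void *ctx)
    {
        ucontext_t *uc = ctx;
        uintptr_t pc = 0;   /* arch-specific: read from uc->uc_mcontext */
        (void)sig; (void)si; (void)uc;

        if (cancel_requested
            && pc >= (uintptr_t)__cp_begin && pc < (uintptr_t)__cp_end) {
            /* interrupted at the cancellation point: patch the saved pc so
             * that, on return from the handler, control goes to the cancel
             * path instead of entering or restarting the syscall */
        }
    }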
|
|
otherwise we cannot support an application's desire to use
asynchronous cancellation within the callback function. this change
also slightly debloats pthread_create.c.
|
|
|
|
calling pthread_exit from, or pthread_cancel on, the timer callback
thread will no longer destroy the timer.
|
|
|
|
since timer_create is no longer allocating a structure for the timer_t
and simply using the kernel timer id, it was impossible to specify the
timer_t as the argument to the signal handler. the solution is to pass
the null sigevent pointer on to the kernel, rather than filling it in
userspace, so that the kernel does the right thing. however, that
precludes the clever timerid-versus-threadid encoding we were doing.
instead, just assume timerids are below 1M and thread pointers are
above 1M. (in perspective: timerids are sequentially allocated and
seem limited to 32k, and thread pointers are at roughly 3G.)
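the decode used by the timer_* functions then reduces to a magnitude check, roughly (struct and names assumed):

    #include <stdint.h>
    #include <time.h>

    struct timer_thread { int timer_id; /* ... */ };

    /* small timer_t values are raw kernel timer ids; large ones are
     * pointers to the timer thread's state */
    static int timerid_of(timer_t t)
    {
        if ((uintptr_t)t < 0x100000)
            return (int)(uintptr_t)t;                  /* signal-based timer */
        return ((struct timer_thread *)t)->timer_id;   /* thread-based timer */
    }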
|
|
|
|
this is necessary in order to avoid breaking timer_getoverrun in the
last run of the timer event handler, if it has not yet finished.
|
|
|
|
instead of allocating a userspace structure for signal-based timers,
simply use the kernel timer id. we use the fact that thread pointers
will always be zero in the low bit (actually more) to encode integer
timerid values as pointers.
also, this change ensures that the timer_destroy syscall has completed
before the library timer_destroy function returns, in case it matters.
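the encoding is the usual low-bit tag trick, along these lines (illustrative, not the exact source):

    #include <stdint.h>
    #include <time.h>

    /* thread pointers are at least 2-byte aligned, so their low bit is
     * always 0; tag integer timer ids with a 1 in that bit */
    static timer_t encode_timerid(int id)
    {
        return (timer_t)((uintptr_t)id << 1 | 1);
    }

    static int is_plain_timerid(timer_t t)
    {
        return (uintptr_t)t & 1;
    }

    static int decode_timerid(timer_t t)
    {
        return (int)((uintptr_t)t >> 1);
    }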
|
|
the major idea of this patch is not to depend on having the timer
pointer delivered to the signal handler, and instead use the thread
pointer to get the callback function address and argument. this way,
the parent thread can make the timer_create syscall while the child
thread is starting, and it should never have to block waiting for the
barrier.
|
|
this allows small programs which only create timers, but never delete
them, to use simple_malloc instead of the full malloc.
|
|
this implementation is superior to the glibc/nptl implementation, in
that it gives true realtime behavior. there is no risk of timer
expiration events being lost due to failed thread creation or failed
malloc, because the thread is created at timer creation time, and
reused until the timer is deleted.
|
|
this commit addresses two issues:
1. a race condition, whereby a cancellation request occurring after a
syscall returned from kernelspace but before the subsequent
CANCELPT_END would cause cancellable resource-allocating syscalls
(like open) to leak resources.
2. signal handlers invoked while the thread was blocked at a
cancellation point behaved as if asynchronous cancellation mode were in
effect, resulting in potentially dangerous state corruption if a
cancellation request occurred.
the glibc/nptl implementation of threads shares both of these issues.
with this commit, both are fixed. however, cancellation points
encountered in a signal handler will not be acted upon if the signal
was received while the thread was already at a cancellation point.
they will of course be acted upon after the signal handler returns, so
in real-world usage where signal handlers quickly return, it should
not be a problem. it's possible to solve this problem too by having
sigaction() wrap all signal handlers with a function that uses a
pthread_cleanup handler to catch cancellation, patch up the saved
context, and return into the cancellable function that will catch and
act upon the cancellation. however that would be a lot of complexity
for minimal if any benefit...
|