this was discussed on the mailing list and no consensus on the
preferred solution was reached, so in anticipation of a release, i'm
just committing a minimally-invasive solution that avoids the problem
by ensuring that multi-threaded-capable programs will always have
initialized the thread pointer before any signal handler can run.
in the long term we may switch to initializing the thread pointer at
program start time whenever the program has the potential to access
any per-thread data.
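
a minimal sketch of what "initialized before any signal handler can run"
can look like in practice; __pthread_self_init and __libc_sigaction are
hypothetical names used only for illustration, not the actual code:

    #include <signal.h>

    int __pthread_self_init(void);   /* assumed: sets up the thread pointer */
    int __libc_sigaction(int, const struct sigaction *, struct sigaction *);

    /* install handlers only after the thread pointer exists, so any handler
     * that later runs can safely touch per-thread data */
    int sigaction_sketch(int sig, const struct sigaction *sa, struct sigaction *old)
    {
        static volatile int tp_ready;
        if (sa && !tp_ready) {
            __pthread_self_init();
            tp_ready = 1;
        }
        return __libc_sigaction(sig, sa, old);
    }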
|
|
this port assumes eabi calling conventions, eabi linux syscall
convention, and presence of the kernel helpers at 0xffff0f?0 needed
for threads support. otherwise it makes very few assumptions, and the
code should work even on armv4 without thumb support, as well as on
systems with thumb interworking. the bits headers declare this a
little endian system, but as far as i can tell the code should work
equally well on big endian.
some small details are probably broken; so far, testing has been
limited to qemu/aboriginal linux.
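
for reference, the kernel helpers in question are the fixed-address
kuser routines; a rough sketch of how they can be reached from c
(the a_cas/tp_read wrapper names are just for illustration):

    /* documented fixed addresses on arm linux */
    typedef int   (*kuser_cmpxchg_t)(int oldval, int newval, volatile int *ptr);
    typedef void *(*kuser_get_tls_t)(void);
    #define KUSER_CMPXCHG ((kuser_cmpxchg_t)0xffff0fc0)
    #define KUSER_GET_TLS ((kuser_get_tls_t)0xffff0fe0)

    /* compare-and-swap built on the helper; returns the value observed.
     * the helper returns 0 when it actually performed the swap. */
    static int a_cas(volatile int *p, int t, int s)
    {
        int v;
        for (;;) {
            if (!KUSER_CMPXCHG(t, s, p)) return t;
            if ((v = *p) != t) return v;
        }
    }

    /* the thread pointer (TLS base) comes from the get_tls helper */
    static void *tp_read(void) { return KUSER_GET_TLS(); }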
|
|
if saved, the signal mask would not be restored unless some low signals
were masked. if not saved, the signal mask could be wrongly restored to
uninitialized values. in any case, the wrong mask would be restored.
i believe this function was written for a very old version of the
jmp_buf structure which did not contain a final 0 field for
compatibility with siglongjmp, and was never updated...
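
for context, the general scheme a siglongjmp-compatible jmp_buf follows;
field and function names here are illustrative, not the actual layout:

    #include <setjmp.h>
    #include <signal.h>

    struct sjb {
        jmp_buf core;        /* registers saved by setjmp */
        int mask_was_saved;  /* the flag field: was a signal mask stored? */
        sigset_t saved_mask;
    };

    /* must be a macro: setjmp cannot be wrapped in a function that returns */
    #define SJB_SETJMP(b, savemask) ( \
        ((b)->mask_was_saved = (savemask) && \
            !sigprocmask(SIG_SETMASK, 0, &(b)->saved_mask)), \
        setjmp((b)->core))

    void sjb_longjmp(struct sjb *b, int val)
    {
        /* restore only a mask that was actually saved; restoring an
         * uninitialized mask is exactly the bug described above */
        if (b->mask_was_saved)
            sigprocmask(SIG_SETMASK, &b->saved_mask, 0);
        longjmp(b->core, val ? val : 1);
    }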
|
|
this race is fundamentally due to linux's bogus requirement that
userspace, rather than kernelspace, fill in the siginfo structure. an
intervening signal handler that calls fork could cause both the parent
and child process to send signals claiming to be from the parent,
which could in turn have harmful effects depending on what the
recipient does with the signal. we simply block all signals for the
interval between getuid and sigqueue syscalls (much like what raise()
does already) to prevent the race and make the getuid/sigqueue pair
atomic.
this will be a non-issue if linux is fixed to validate the siginfo
structure or fill it in from kernelspace.
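
a bare-bones sketch of the blocking window (illustrative only; the
siginfo setup and error handling are elided):

    #include <signal.h>

    void send_queued_signal_atomically(void)
    {
        sigset_t all, old;
        sigfillset(&all);
        sigprocmask(SIG_BLOCK, &all, &old);  /* no handler (and thus no fork) can run now */

        /* ... read uid/pid and issue the rt_sigqueueinfo syscall here ... */

        sigprocmask(SIG_SETMASK, &old, 0);   /* restore the caller's mask */
    }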
|
|
it's nicer for the function that doesn't use errno to be independent,
and have the other one call it. saves some time and avoids clobbering
errno.
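
the commit doesn't name the pair here, but the pattern it describes
looks roughly like this (illustrative names; the errno-free primitive
does the work, and the errno-using one stays a thin wrapper):

    #include <errno.h>

    /* primitive: returns a negated error code, never touches errno */
    long op_nocheck(long x);

    /* wrapper: the only place errno is written */
    long op(long x)
    {
        long r = op_nocheck(x);
        if (r < 0) {
            errno = (int)-r;
            return -1;
        }
        return r;
    }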
|
|
this seems to be necessary to make the linker accept the functions in
a shared library (perhaps to generate PLT entries?)
strictly speaking libc-internal asm should not need it. i might clean
that up later.
|
|
these are useless and have caused problems for users trying to build
with non-gnu tools like tcc's assembler.
|
|
the new approach relies on the fact that the only ways to create
sigset_t objects without invoking UB are to use the sig*set()
functions, to copy the masks returned by sigprocmask, sigaction, etc.,
or to copy the mask in the ucontext_t argument to a signal handler.
thus, as long as
sigfillset and sigaddset avoid adding the "protected" signals, there
is no way the application will ever obtain a sigset_t including these
bits, and thus no need to add the overhead of checking/clearing them
when sigprocmask or sigaction is called.
note that the old code actually *failed* to remove the bits from
sa_mask when sigaction was called.
the new implementations are also significantly smaller, simpler, and
faster due to ignoring the useless "GNU HURD signals" 65-1024, which
are not used and, if there's any sanity in the world, never will be
used.
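
a rough sketch of what this means for the sig*set functions (the
reserved signal numbers and names below are illustrative, not the
actual ones):

    #include <errno.h>
    #include <signal.h>
    #include <string.h>

    #define SIG_RSVD_LO 32   /* e.g. an internal cancellation signal */
    #define SIG_RSVD_HI 33

    int sigaddset_sketch(sigset_t *set, int sig)
    {
        unsigned s = sig - 1;
        if (sig < 1 || sig > 64 || (sig >= SIG_RSVD_LO && sig <= SIG_RSVD_HI)) {
            errno = EINVAL;
            return -1;
        }
        ((unsigned long *)set)[s/(8*sizeof(long))] |= 1UL << (s%(8*sizeof(long)));
        return 0;
    }

    int sigfillset_sketch(sigset_t *set)
    {
        /* only the low 64 signals exist as far as we're concerned; the
         * "hurd signals" 65-1024 are never set */
        memset(set, 0, sizeof *set);
        for (int i = 1; i <= 64; i++)
            if (i < SIG_RSVD_LO || i > SIG_RSVD_HI)
                sigaddset_sketch(set, i);
        return 0;
    }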
|
|
this patch improves the correctness, simplicity, and size of
cancellation-related code. modulo any small errors, it should now be
completely conformant, safe, and resource-leak free.
the notion of entering and exiting cancellation-point context has been
completely eliminated and replaced with alternative syscall assembly
code for cancellable syscalls. the assembly is responsible for setting
up execution context information (stack pointer and address of the
syscall instruction) which the cancellation signal handler can use to
determine whether the interrupted code was in a cancellable state.
these changes eliminate race conditions in the previous generation of
cancellation handling code (whereby a cancellation request received
just prior to the syscall would not be processed, leaving the syscall
to block, potentially indefinitely), and remedy an issue where
non-cancellable syscalls made from signal handlers became cancellable
if the signal handler interrupted a cancellation point.
x86_64 asm is untested and may need a second try to get it right.
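
a very rough sketch of the check the cancellation signal handler
performs (symbol and field names are illustrative; the real logic also
has to patch the saved context to redirect execution):

    #include <stdint.h>

    /* provided by the cancellable-syscall asm: addresses bracketing the
     * syscall instruction */
    extern const char __cp_begin[], __cp_end[];

    struct tcb {
        volatile int cancel;   /* cancellation requested */
        void *cancel_point;    /* nonzero only while inside the cancellable asm */
    };

    /* called from the signal handler with the interrupted program counter
     * pulled out of the ucontext (arch-specific) */
    static int should_act_now(struct tcb *self, uintptr_t pc)
    {
        self->cancel = 1;
        /* act immediately only if the request arrived inside the window,
         * i.e. before the syscall instruction could block */
        return self->cancel_point
            && pc >= (uintptr_t)__cp_begin
            && pc <  (uintptr_t)__cp_end;
    }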
|
|
otherwise we cannot support an application's desire to use
asynchronous cancellation within the callback function. this change
also slightly debloats pthread_create.c.
|
|
sadly the C language does not specify any such implicit conversion, so
this is not a matter of just fixing warnings (as gcc treats it) but of
fixing actual errors. i would like to revisit a number of these changes
and possibly revise the types used to reduce the number of casts
required.
|
|
this commit addresses two issues:
1. a race condition, whereby a cancellation request occurring after a
syscall returned from kernelspace but before the subsequent
CANCELPT_END would cause cancellable resource-allocating syscalls
(like open) to leak resources.
2. signal handlers invoked while the thread was blocked at a
cancellation point behaved as if asynchronous cancellation mode were in
effect, resulting in potentially dangerous state corruption if a
cancellation request occurred.
the glibc/nptl implementation of threads shares both of these issues.
with this commit, both are fixed. however, cancellation points
encountered in a signal handler will not be acted upon if the signal
was received while the thread was already at a cancellation point.
they will of course be acted upon after the signal handler returns, so
in real-world usage where signal handlers quickly return, it should
not be a problem. it's possible to solve this problem too by having
sigaction() wrap all signal handlers with a function that uses a
pthread_cleanup handler to catch cancellation, patch up the saved
context, and return into the cancellable function that will catch and
act upon the cancellation. however that would be a lot of complexity
for minimal if any benefit...
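
issue 1 can be pictured like this; CANCELPT_BEGIN/END are the old
markers named above, stubbed out here purely so the shape of the window
is visible, and fake_sys_open stands in for the raw syscall:

    #define CANCELPT_BEGIN  /* old: enter cancellable context */
    #define CANCELPT_END    /* old: leave cancellable context */

    long fake_sys_open(const char *path, int flags);

    int open_old_scheme(const char *path, int flags)
    {
        long fd;
        CANCELPT_BEGIN;
        fd = fake_sys_open(path, flags);
        /* a cancellation request delivered at this point (the fd already
         * allocated by the kernel, CANCELPT_END not yet reached) used to
         * unwind the thread with nothing ever closing fd */
        CANCELPT_END;
        return (int)fd;
    }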
|
|
with this patch, the syscallN() functions are no longer needed; a
variadic syscall() macro allows syscalls with anywhere from 0 to 6
arguments to be made with a single macro name. also, manually casting
each non-integer argument with (long) is no longer necessary; the
casts are hidden in the macros.
some source files which depended on being able to define the old macro
SYSCALL_RETURNS_ERRNO have been modified to directly use __syscall()
instead of syscall(). references to SYSCALL_SIGSET_SIZE and SYSCALL_LL
have also been changed.
x86_64 has not been tested, and may need a follow-up commit to fix any
minor bugs/oversights.
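
a compressed sketch of the dispatch trick (identifiers are
illustrative, not the exact ones used in syscall.h):

    #define __scc(x) ((long)(x))   /* hide the per-argument cast */

    long __sc_raw0(long n);
    long __sc_raw1(long n, long a);
    long __sc_raw2(long n, long a, long b);
    long __sc_raw3(long n, long a, long b, long c);
    /* ...declarations continue through __sc_raw6... */

    #define __SC0(n)        __sc_raw0(n)
    #define __SC1(n,a)      __sc_raw1(n,__scc(a))
    #define __SC2(n,a,b)    __sc_raw2(n,__scc(a),__scc(b))
    #define __SC3(n,a,b,c)  __sc_raw3(n,__scc(a),__scc(b),__scc(c))
    /* ...and so on up to __SC6... */

    /* count the arguments (syscall number included) to pick the arity */
    #define __SC_NARGS_X(a,b,c,d,e,f,g,h,n,...) n
    #define __SC_NARGS(...) __SC_NARGS_X(__VA_ARGS__,7,6,5,4,3,2,1,0,)
    #define __SC_CONCAT_X(a,b) a##b
    #define __SC_CONCAT(a,b) __SC_CONCAT_X(a,b)

    /* __syscall(SYS_open, path, flags) expands to
     * __SC2(SYS_open, path, flags), so the casts happen automatically */
    #define __syscall(...) \
        __SC_CONCAT(__SC, __SC_NARGS(__VA_ARGS__))(__VA_ARGS__)

    /* the errno-setting variant just wraps the raw return value */
    long __syscall_ret(long r);
    #define syscall(...) __syscall_ret(__syscall(__VA_ARGS__))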
|
|
1. any padding in the siginfo struct was not necessarily zero-filled,
so it might have contained private data off the caller's stack.
2. the uid and pid must be filled in from userspace. the previous
rsyscall fix broke rsyscalls because the values were always incorrect.
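
roughly, the fixed version needs to look like this (sketch only; error
handling and the signal blocking discussed in the newer entry above are
omitted):

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int sigqueue_sketch(pid_t pid, int sig, union sigval value)
    {
        siginfo_t si;
        memset(&si, 0, sizeof si);   /* issue 1: no stack garbage left in padding */
        si.si_signo = sig;
        si.si_code  = SI_QUEUE;
        si.si_value = value;
        si.si_uid   = getuid();      /* issue 2: the kernel won't fill these in */
        si.si_pid   = getpid();
        return syscall(SYS_rt_sigqueueinfo, pid, sig, &si);
    }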
|
|
POSIX allows either behavior, but sigwait is not allowed to fail with
EINTR, so the retry loop would have to be in one or the other anyway.
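
e.g., with the retry loop on the sigwait side it might look like this
(sketch built on sigwaitinfo rather than the raw syscall):

    #include <errno.h>
    #include <signal.h>

    int sigwait_sketch(const sigset_t *set, int *sig)
    {
        int r;
        do r = sigwaitinfo(set, 0);
        while (r < 0 && errno == EINTR);   /* swallow EINTR here */
        if (r < 0) return errno;
        *sig = r;
        return 0;
    }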
|
|
it must return errno, not -1, and should reject invalid values for how.
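
i.e., roughly (sketch built on sigprocmask rather than the raw syscall):

    #include <errno.h>
    #include <signal.h>

    int pthread_sigmask_sketch(int how, const sigset_t *set, sigset_t *old)
    {
        if (how != SIG_BLOCK && how != SIG_UNBLOCK && how != SIG_SETMASK)
            return EINVAL;               /* reject bogus how up front */
        if (sigprocmask(how, set, old))
            return errno;                /* propagate the error code, not -1 */
        return 0;
    }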
|
|
a signal handler could fork after the pid/tid were read, causing the
wrong process to be signalled. i'm not sure if this is supposed to
have UB or not, but raise is async-signal-safe, so it probably is
allowed. the current solution is slightly expensive so this
implementation is likely to be changed in the future.
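
the current approach is roughly the following (sketch; tid caching and
error handling omitted):

    #include <signal.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int raise_sketch(int sig)
    {
        sigset_t all, old;
        long r;
        sigfillset(&all);
        sigprocmask(SIG_BLOCK, &all, &old);   /* no handler can fork in this window */
        r = syscall(SYS_tgkill, getpid(), (pid_t)syscall(SYS_gettid), sig);
        sigprocmask(SIG_SETMASK, &old, 0);
        return (int)r;
    }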
|
|
this code was wrongly disabled because the old version was trying to
be too clever and didn't work. replaced it with a simple version for
now.