these functions have no new time64 syscall, so the existence of a
time64 syscall cannot be used as the condition for the new code.
instead, assume the syscall takes timevals as longs, which is true
everywhere but x32, and interface with the kernel through long[4]
objects.

rather than adding new hacks to special-case x32 here, just add
x32-specific source files since a trivial syscall wrapper suffices
there.

the new code paths added in this commit are statically unreachable on
all current archs, but will become reachable when 32-bit archs get
64-bit time_t.
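for illustration, a rough C sketch of the long[4] marshalling described
above, using a setitimer-style call as the example (the commit message
does not name the functions, and this is not the actual musl code;
syscall() stands in for the libc-internal syscall mechanism, and range
checks on the seconds fields are omitted):

#define _GNU_SOURCE
#include <sys/time.h>
#include <sys/syscall.h>
#include <unistd.h>

static int setitimer_sketch(int which, const struct itimerval *restrict new,
                            struct itimerval *restrict old)
{
        if (sizeof(time_t) > sizeof(long)) {
                /* 32-bit arch with 64-bit time_t (x32 excluded; it gets its
                 * own trivial wrapper file): the kernel still takes longs,
                 * so pass the two timevals through a long[4] object */
                long knew[4] = {
                        new->it_interval.tv_sec, new->it_interval.tv_usec,
                        new->it_value.tv_sec, new->it_value.tv_usec,
                };
                long kold[4];
                int r = syscall(SYS_setitimer, which, knew, kold);
                if (!r && old) {
                        old->it_interval.tv_sec = kold[0];
                        old->it_interval.tv_usec = kold[1];
                        old->it_value.tv_sec = kold[2];
                        old->it_value.tv_usec = kold[3];
                }
                return r;
        }
        /* time_t and the kernel's timeval fields are both long: the caller's
         * structures can be passed straight through */
        return syscall(SYS_setitimer, which, new, old);
}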
|
these were overlooked in the declarations overhaul work because they
are not properly declared, and the current framework even allows their
declared types to vary by arch. at some point this should be cleaned
up, but I'm not sure what the right way would be.
|
gdb can only backtrace/unwind across signal handlers if it recognizes
the sa_restorer trampoline. for x86_64, gdb first attempts to
determine the symbol name for the function in which the program
counter resides and match it against "__restore_rt". if no name can be
found (e.g. in the case of a stripped binary), the exact instruction
sequence is matched instead.

when matching the function name, however, gdb's unwind code wrongly
considers the interval [sym,sym+size] rather than [sym,sym+size).
thus, if __restore_rt begins immediately after another function, gdb
wrongly identifies pc as lying within the previous adjacent function.

this patch adds a nop before __restore_rt to preclude that
possibility. it also removes the symbol name __restore and replaces it
with a macro, since it is not clear whether gdb would stably identify
the function as __restore_rt rather than __restore.

for the no-symbols case, the instruction sequence is changed to use
%rax rather than %eax to match what gdb expects.

based on a patch by Szabolcs Nagy, with extended description and
corresponding x32 changes added.
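to make the off-by-one concrete, a small self-contained illustration of
the interval check described above (the addresses are invented and this
is not gdb's code):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uintptr_t prev_sym = 0x1000, prev_size = 0x10; /* function preceding the trampoline */
        uintptr_t restore_rt = prev_sym + prev_size;   /* __restore_rt placed immediately after it */
        uintptr_t pc = restore_rt;                     /* pc at the start of the trampoline */

        /* closed interval [sym,sym+size]: pc wrongly matches the previous function */
        int closed = pc >= prev_sym && pc <= prev_sym + prev_size;
        /* half-open interval [sym,sym+size): pc correctly does not match it */
        int halfopen = pc >= prev_sym && pc < prev_sym + prev_size;

        printf("closed=%d halfopen=%d\n", closed, halfopen);

        /* with a nop inserted before __restore_rt, its address is no longer
         * equal to prev_sym + prev_size, so even the closed-interval check
         * cannot attribute pc to the previous function */
        return 0;
}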
|
the 64-bit push reads not only the 32-bit return address but also the
first 32 signal mask bits. if any of them are nonzero, the return
address obtained will be invalid.

at some point storage of the return address should probably be moved
to follow the saved mask so that there's plenty of room and the same
code can be used on x32 and regular x86_64, but for now I want a fix
that does not risk breaking x86_64, and this simple re-zeroing works.
|
analogous to commit 8ed66ecbcba1dd0f899f22b534aac92a282f42d5 for i386.
|
the conventional way to implement sigsetjmp is to save the signal mask
then tail-call to setjmp; siglongjmp then restores the signal mask and
calls longjmp. the problem with this approach is that a signal already
pending, or arriving between unmasking of signals and restoration of
the saved stack pointer, will have its signal handler run on the stack
that was active before siglongjmp was called. this can lead to
unbounded stack usage when siglongjmp is used to leave a signal
handler.
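a minimal C sketch of the conventional siglongjmp side described above,
to show where the problem window sits (the struct layout and names here
are hypothetical, not musl's):

#define _POSIX_C_SOURCE 200809L
#include <setjmp.h>
#include <signal.h>

struct conv_sigjmp_buf {
        jmp_buf jb;       /* filled in by setjmp via the sigsetjmp wrapper */
        int saved;        /* nonzero if a mask was saved */
        sigset_t mask;    /* the mask saved by sigsetjmp */
};

static void conv_siglongjmp(struct conv_sigjmp_buf *b, int val)
{
        /* the mask is restored while still executing on the stack being
         * abandoned; a signal that is already pending, or arrives before
         * longjmp restores the saved stack pointer, runs its handler on
         * that old stack */
        if (b->saved) sigprocmask(SIG_SETMASK, &b->mask, 0);
        longjmp(b->jb, val);
}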
in the new design, sigsetjmp saves its own return address inside the
extended part of the sigjmp_buf (outside the __jmp_buf part used by
setjmp) then calls setjmp to save a jmp_buf inside its own execution.
it then tail-calls to __sigsetjmp_tail, which uses the return value of
setjmp to determine whether to save the current signal mask or restore
a previously-saved mask.
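a sketch of the tail step's logic under the new design (the real
__sigsetjmp_tail is internal to musl and takes the sigjmp_buf; here the
mask-storage location is passed directly to keep the sketch
self-contained):

#define _POSIX_C_SOURCE 200809L
#include <signal.h>

static int sigsetjmp_tail_sketch(sigset_t *saved_mask, int ret)
{
        if (ret) {
                /* entered via siglongjmp: setjmp returned nonzero and the
                 * stack pointer has already been restored, so restoring the
                 * mask now cannot cause handlers to run on the abandoned
                 * stack */
                sigprocmask(SIG_SETMASK, saved_mask, 0);
        } else {
                /* direct call from sigsetjmp: just record the current mask */
                sigprocmask(SIG_SETMASK, 0, saved_mask);
        }
        return ret;
}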
as an added bonus, this design makes it so that siglongjmp and longjmp
are identical. this is useful because the __longjmp_chk function we
need to add for ABI-compatibility assumes siglongjmp and longjmp are
the same, but for different reasons -- it was designed assuming either
can access a flag just past the __jmp_buf indicating whether the
signal mask was saved, and act on that flag. however, early versions
of musl did not have space past the __jmp_buf for the non-sigjmp_buf
version of jmp_buf, so our setjmp cannot store such a flag without
risking clobbering memory on (very) old binaries.