|
the original wcsnrtombs implementation, which has been largely
untouched since 0.5.0, attempted to build input-length-limiting
conversion on top of wcsrtombs, which only limits output length. as
best I recall, this choice was made out of a mix of disdain over
having yet another variant function to implement (added in POSIX 2008;
not standard C) and preference not to switch things around and
implement the wcsrtombs in terms of the more general new function,
probably over namespace issues. the strategy employed was to impose
output limits that would ensure the input limit wasn't exceeded, then
finish up the tail character-at-a-time. unfortunately, none of that
worked correctly.
first, the logic in the wcsrtombs loop was wrong in that it could
easily get stuck making no forward progress, by imposing an output
limit too small to convert even one character.
the character-at-a-time loop that followed was even worse. it made no
effort to ensure that the converted multibyte character would fit in
the remaining output space, only that there was a nonzero amount of
output space remaining. it also employed an incorrect interpretation
of wcrtomb's interface contract for converting the null character,
thereby failing to act on end of input, and remaining space accounting
was subject to unsigned wrap-around. together these errors allow
unbounded overflow of the destination buffer, controlled by input
length limit and input wchar_t string contents.
given the extent to which this function was broken, it's plausible
that most applications that would have been rendered exploitable were
sufficiently broken not to be usable in the first place. however, it's
also plausible that common (especially ASCII-only) inputs succeeded in
the wcsrtombs loop, which mostly worked, while leaving the wildly
erroneous code in the second loop exposed to particular non-ASCII
inputs.
CVE-2020-28928 has been assigned for this issue.
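for contrast, a minimal sketch of doing the conversion directly with an
input-length limit, converting through a temporary buffer so the output
limit can be checked before anything is committed to dst (not the actual
musl fix; shift-state corner cases are glossed over):
    #include <limits.h>
    #include <string.h>
    #include <wchar.h>

    size_t sketch_wcsnrtombs(char *restrict dst, const wchar_t **restrict src,
                             size_t wn, size_t n, mbstate_t *restrict st)
    {
        const wchar_t *ws = *src;
        size_t cnt = 0;
        while (wn) {
            char tmp[MB_LEN_MAX];
            size_t l = wcrtomb(tmp, *ws, st);
            if (l == (size_t)-1) return -1;       /* encoding error */
            if (dst) {
                if (l > n) break;                 /* would overflow the output */
                memcpy(dst, tmp, l);
                dst += l;
                n -= l;
            }
            if (!*ws) {                           /* stored the terminator: done */
                ws = 0;
                break;
            }
            ws++;
            wn--;
            cnt += l;
        }
        if (dst) *src = ws;
        return cnt;
    }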
|
|
after a non-normal-type process-shared mutex is unlocked, it's
immediately available to another thread to lock, unlock, and destroy,
but the first unlocking thread may still have a pointer to it in its
robust_list pending slot. this means, on async process termination,
the kernel may attempt to access and modify the memory that used to
contain the mutex -- memory that may have been reused for some other
purpose after the mutex was destroyed.
setting up for this kind of race to occur is difficult to begin with,
requiring dynamic use of shared memory maps, and actually hitting the
race is very difficult even with a suitable setup. so this is mostly a
theoretical fix, but in any case the cost is very low.
|
|
the __vm_wait operation can delay forward progress arbitrarily long if
a thread holding the lock is interrupted by a signal. in a worst case
this can deadlock. any critical section holding the thread list lock
must respect lock ordering contracts and must not take any lock which
is not AS-safe.
to fix, move the determination of thread joinable/detached state to
take place before the killlock and thread list lock are taken. this
requires reverting the atomic state transition if we determine that
the exiting thread is the last thread and must call exit, but that's
easy to do since it's a single-threaded context with application
signals blocked.
|
|
as the outcome of Austin Group tracker issue #62, future editions of
POSIX have dropped the requirement that fork be AS-safe. this allows
but does not require implementations to synchronize fork with internal
locks and give forked children of multithreaded parents a partly or
fully unrestricted execution environment where they can continue to
use the standard library (per POSIX, they can only portably use
AS-safe functions).
up until recently, taking this allowance did not seem desirable.
however, commit 8ed2bd8bfcb4ea6448afb55a941f4b5b2b0398c0 exposed the
extent to which applications and libraries are depending on the
ability to use malloc and other non-AS-safe interfaces in MT-forked
children, by converting latent very-low-probability catastrophic state
corruption into predictable deadlock. dealing with the fallout has
been a huge burden for users/distros.
while it looks like most of the non-portable usage in applications
could be fixed given sufficient effort, at least some of it seems to
occur in language runtimes which are exposing the ability to run
unrestricted code in the child as part of the contract with the
programmer. any attempt at fixing such contracts is not just a
technical problem but a social one, and is probably not tractable.
this patch extends the fork function to take locks for all libc
singletons in the parent, and release or reset those locks in the
child, so that when the underlying fork operation takes place, the
state protected by these locks is consistent and ready for the child
to use. locking is skipped in the case where the parent is
single-threaded so as not to interfere with the legacy AS-safety property
of fork in single-threaded programs. lock order is mostly arbitrary,
but the malloc locks (including bump allocator in case it's used) must
be taken after the locks on any subsystems that might use malloc, and
non-AS-safe locks cannot be taken while the thread list lock is held,
imposing a requirement that it be taken last.
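in outline, the resulting fork behaves like the following sketch; the
lock-taking helpers are hypothetical stand-ins for the per-subsystem locks,
and signal blocking is shown with the public API:
    #include <pthread.h>
    #include <signal.h>
    #include <unistd.h>

    pid_t _Fork(void);                    /* AS-safe fork, runs no atfork handlers */

    /* hypothetical stand-ins for the per-subsystem lock operations */
    void take_all_libc_locks(void);       /* malloc locks late, thread list lock last */
    void release_all_libc_locks_in_parent(void);
    void reset_all_libc_locks_in_child(void);
    int parent_is_multithreaded(void);

    static pid_t fork_outline(void)
    {
        sigset_t all, old;
        sigfillset(&all);
        pthread_sigmask(SIG_BLOCK, &all, &old);

        int locks = parent_is_multithreaded();
        if (locks) take_all_libc_locks();

        pid_t ret = _Fork();

        if (locks) {
            if (ret) release_all_libc_locks_in_parent();
            else reset_all_libc_locks_in_child();
        }

        pthread_sigmask(SIG_SETMASK, &old, 0);
        return ret;
    }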
|
|
this change lifts undocumented restrictions on calls by replacement
mallocs to libc functions that might take these locks, and sets the
stage for lifting restrictions on the child execution environment
after multithreaded fork.
care is taken to #define macros to replace all four functions (malloc,
calloc, realloc, free) even if not all of them will be used, using an
undefined symbol name for the ones intended not to be used so that any
inadvertent future use will be caught at compile time rather than
directed to the wrong implementation.
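concretely, the pattern amounts to something like the following, with
illustrative names for the deliberately-undefined symbols:
    /* in a source file that should use only the internal allocator and is
     * not supposed to call realloc or free at all: */
    #define malloc  __libc_malloc
    #define calloc  __libc_calloc
    #define realloc undef_realloc   /* deliberately undefined symbols, so any */
    #define free    undef_free      /* inadvertent use fails the build */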
|
|
allowing the application to replace malloc (since commit
c9f415d7ea2dace5bf77f6518b6afc36bb7a5732) has brought multiple
headaches where it's used from various critical sections in libc
components. for example:
- the thread-local message buffers allocated for dlerror can't be
freed at thread exit time because application code would then run in
the context of a non-existent thread. this was handled in commit
aa5a9d15e09851f7b4a1668e9dbde0f6234abada by queuing them for free
later.
- the dynamic linker has to be careful not to pass memory allocated at
early startup time (necessarily using its own malloc) to realloc or
free after redoing relocations with the application and all
libraries present. bugs in this area were fixed several times, at
least in commits 0c5c8f5da6e36fe4ab704bee0cd981837859e23f and
2f1f51ae7b2d78247568e7fdb8462f3c19e469a4 and possibly others.
- by calling the allocator from contexts where libc-internal locks are
held, we impose undocumented requirements on alternate malloc
implementations not to call into any libc function that might
attempt to take these locks; if they do, deadlock results.
- work to make fork of a multithreaded parent give the child an
unrestricted execution environment is blocked by lock order issues
as long as the application-provided allocator can be called with
libc-internal locks held.
these problems are all fixed by giving libc internals access to the
original, non-replaced allocator, for use where needed. it can't be
used everywhere, as some interfaces like str[n]dup, open_[w]memstream,
getline/getdelim, etc. are required to provide the caller with memory
obtained as if by (the public) malloc. and there are a number of libc
interfaces that are "pure library" code, not part of some internal
singleton, and where using the application's choice of malloc
implementation is preferable -- things like glob, regex, etc.
one might expect there to be significant cost to static-linked
programs, pulling in two malloc implementations, one of them
mostly-unused, if malloc is replaced. however, in almost all of the
places where malloc is used internally, care has been taken already
not to pull in realloc/free (i.e. to link with just the bump
allocator). this size optimization carries over automatically.
the newly-exposed internal allocator functions are obtained by
renaming the actual definitions, then adding new wrappers around them
with the public names. technically __libc_realloc and __libc_free
could be aliases rather than needing a layer of wrapper, but this
would almost surely break certain instrumentation (valgrind) and the
size and performance difference is negligible. __libc_calloc needs to
be handled specially since calloc is designed to work with either the
internal or the replaced malloc.
as a bonus, this change also eliminates the longstanding ugly
dependency of the static bump allocator on order of object files in
libc.a, by making it so there's only one definition of the malloc
function and having it in the same source file as the bump allocator.
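the renaming-plus-wrapper arrangement is roughly:
    #include <stddef.h>

    /* hedged sketch: the real definitions are the renamed internal allocator;
     * these thin wrappers carry the public names (attributes omitted) */
    void *__libc_malloc(size_t);
    void __libc_free(void *);

    void *malloc(size_t n)
    {
        return __libc_malloc(n);
    }

    void free(void *p)
    {
        __libc_free(p);
    }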
|
|
thread-local buffers allocated for dlerror need to be queued for free
at a later time when the owning thread exits, since malloc may be
replaced by application code and the exiting context is not valid to
call application code from. the code to process the queue of pending
frees, introduced in commit aa5a9d15e09851f7b4a1668e9dbde0f6234abada,
gratuitously held the lock for the entire duration of queue
processing, updating the global queue pointer after each free, despite
there being no logical requirement that all frees finish before
another thread can access the queue.
instead, immediately claim the whole queue for freeing and release the
lock, then walk the list and perform frees without the lock held. the
change is unlikely to make any meaningful difference to performance,
but it eliminates one point where the allocator is called under an
internal lock. since the allocator may be application-provided, such
calls are undesirable because they allow application code to impede
forward progress of libc functions in other threads arbitrarily long,
and to induce deadlock if it calls a libc function that requires the
same lock.
the change also eliminates a lock ordering consideration that's an
impediment to upcoming work on multithreaded fork.
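the resulting pattern is the usual claim-then-process one, sketched here
with a plain pthread mutex standing in for the libc-internal lock:
    #include <pthread.h>
    #include <stdlib.h>

    /* each queued buffer's first word links to the next buffer */
    static void *freebuf_queue;
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    static void process_free_queue(void)
    {
        pthread_mutex_lock(&queue_lock);
        void **q = freebuf_queue;        /* claim the entire queue at once */
        freebuf_queue = 0;
        pthread_mutex_unlock(&queue_lock);

        /* walk and free with no lock held, so a (possibly replaced) free
         * cannot block other threads that need the queue lock */
        while (q) {
            void **next = *q;
            free(q);
            q = next;
        }
    }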
|
|
introduced in commit 27b2fc9d6db956359727a66c262f1e69995660aa.
|
|
the reasoning in commit 2d0bbe6c788938d1332609c014eeebc1dff966ac was
not entirely correct. while it's true that setting the waiters flag
ensures that the next unlock will perform a wake, it's possible that
the wake is consumed by a mutex waiter that has no relationship with
the condvar wait queue being processed, which then takes the mutex.
when that thread subsequently unlocks, it sees no waiters, and leaves
the rest of the condvar queue stuck.
bring back the waiter count adjustment, but skip it for PI mutexes,
for which a successful lock-after-waiting always sets the waiters bit.
if future changes are made to bring this same waiters-bit contract to
all lock types, this can be reverted.
|
|
This is like SIGEV_SIGNAL, but targeted to a particular thread's
tid, rather than the process.
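for example (linux-specific; sigev_notify_thread_id is the customary macro
name for the tid field, and gettid() is assumed available):
    #define _GNU_SOURCE
    #include <signal.h>
    #include <time.h>
    #include <unistd.h>

    /* sketch: deliver expirations of a timer as a signal aimed at this
     * thread's kernel tid rather than at the process */
    static int make_thread_timer(timer_t *t)
    {
        struct sigevent sev = { .sigev_notify = SIGEV_THREAD_ID };
        sev.sigev_signo = SIGRTMIN;
        sev.sigev_notify_thread_id = gettid();
        return timer_create(CLOCK_MONOTONIC, &sev, t);
    }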
|
|
sem_open is required to return the same sem_t pointer for all
references to the same named semaphore when it's opened more than once
in the same process. thus we keep a table of all the mapped semaphores
and their reference counts. the code path for sem_close checked the
reference count, but then proceeded to unmap the semaphore regardless
of whether the count had reached zero.
add an immediate unlock-and-return for the nonzero refcnt case so the
property of performing the munmap syscall after releasing the lock can
be preserved.
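the fixed control flow is roughly the following sketch, with an
illustrative stand-in for the real semaphore table and its lock:
    #include <pthread.h>
    #include <semaphore.h>
    #include <sys/mman.h>

    #define SEM_COUNT 256
    static struct { sem_t *sem; int refcnt; } semtab[SEM_COUNT];
    static pthread_mutex_t semtab_lock = PTHREAD_MUTEX_INITIALIZER;

    int sem_close_sketch(sem_t *sem)
    {
        int i;
        pthread_mutex_lock(&semtab_lock);
        /* the semaphore is assumed to be present (sem_open put it there) */
        for (i = 0; semtab[i].sem != sem; i++);
        if (--semtab[i].refcnt) {
            pthread_mutex_unlock(&semtab_lock);   /* still open elsewhere: */
            return 0;                             /* do not unmap */
        }
        semtab[i].sem = 0;
        pthread_mutex_unlock(&semtab_lock);
        munmap(sem, sizeof *sem);   /* perform the munmap after releasing the lock */
        return 0;
    }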
|
|
this avoids some spurious negation and duplicated errno logic, and
brings the code in line with the newly-added multithreaded setgroups.
|
|
this function is outside the scope of the standards, but logically
should behave like the set*id functions whose effects are
process-global.
|
|
resource limits have been process-wide since linux 2.6.10, and the
prlimit syscall was added in 2.6.36, so prlimit can be assumed to set
the resource limits correctly for the whole process.
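so a process-wide limit change reduces to a single call, e.g. (using the
public prlimit wrapper; pid 0 means the calling process):
    #define _GNU_SOURCE
    #include <sys/resource.h>

    static int set_nofile_limit(rlim_t n)
    {
        struct rlimit rl = { .rlim_cur = n, .rlim_max = n };
        return prlimit(0, RLIMIT_NOFILE, &rl, 0);
    }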
|
|
commit bd153422f28634bb6e53f13f80beb8289d405267 reintroduced the bug
fixed in c21051e90cd27a0b26be0ac66950b7396a156ba1 by refactoring the
__syscall_ret into _Fork where it once again runs before the atfork
handlers are called. since _Fork is a public interface that sets
errno, this can't be fixed the way it was fixed last time without
making new internal interfaces. instead, just save errno, and restore
it only on error to ensure that a value of 0 is never restored.
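the resulting logic is roughly (sketch; the handler-running helper is
hypothetical):
    #include <errno.h>
    #include <unistd.h>

    pid_t _Fork(void);                     /* sets errno on failure */
    void run_atfork_handlers(pid_t ret);   /* hypothetical: may clobber errno */

    static pid_t fork_sketch(void)
    {
        pid_t ret = _Fork();
        int errno_save = errno;            /* capture before handlers can clobber it */

        run_atfork_handlers(ret);          /* parent handlers in parent, child's in child */

        if (ret < 0) errno = errno_save;   /* restore only on error; 0 is never restored */
        return ret;
    }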
|
|
pthread_cond_wait arranged for requeued waiters to wake when the mutex
is unlocked by temporarily adjusting the mutex's waiter count. commit
54ca677983d47529bab8752315ac1a2b49888870 broke this when introducing
PI mutexes by repurposing the waiter count field of the mutex
structure. since then, for PI mutexes, the waiter count adjustment was
misinterpreted by the mutex locking code as indicating that the mutex
is in a non-recoverable state.
it would be possible to special-case PI mutexes here, but instead just
drop all adjustment of the waiters count, and instead use the lock
word waiters bit for all mutex types. since the mutex is either held
by the caller or in unrecoverable state at the time the bit is set, it
will necessarily still be set at the time of any subsequent valid
unlock operation, and this will produce the desired effect of waking
the next waiter.
if waiter counts are entirely dropped at some point in the future this
code should still work without modification.
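the idea, illustrated on a stand-in lock word rather than the real mutex
object (in musl the waiters bit is the sign bit of the lock word):
    #include <limits.h>
    #include <stdatomic.h>

    /* hedged illustration: once the sign bit is set on a held (positive)
     * lock word, any subsequent unlock sees waiters and does a futex wake */
    static void mark_contended(atomic_int *lockword)
    {
        int old = atomic_load(lockword);
        while (old > 0 &&
               !atomic_compare_exchange_weak(lockword, &old, old | INT_MIN));
    }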
|
|
commit 25ea9f712c30c32957de493d4711ee39d0bbb024 introduced a deadlock
to the posix_spawn child whereby, if abort was called in the parent
and ended up taking the abort lock to terminate the process, the
__libc_sigaction calls in the child would wait forever to obtain a
lock that would not be released. this could be fixed by having abort
set the abort lock as the exit futex address, but it's cleaner to just
remove the SIGABRT special handling from the internal __libc_sigaction
and lift it to the public sigaction function.
nothing but the posix_spawn child calls __libc_sigaction on SIGABRT,
and since commit b7bc966522d73e1dc420b5ee6fc7a2e78099a08c the abort
lock is held at the time of __clone, which precludes the child
inheriting a kernel-level signal disposition inconsistent with the
disposition on the abstract machine. this means it's fine to inspect
and modify the disposition in the child without a lock.
|
|
Merge changes from Solar Designer's crypt_blowfish v1.3. This makes
crypt_blowfish fully compatible with OpenBSD's bcrypt by adding
support for the $2b$ prefix (which behaves the same as
crypt_blowfish's $2y$).
|
|
|
|
also fix the lack of declaration (and thus hidden visibility) in
__stdio_close's use of __aio_close.
|
|
commit 3990c5c6a40440cdb14746ac080d0ecf8d5d6733 removed the last
reference.
|
|
this makes the code slightly smaller and eliminates timer_create from
relevance to possible future changes to multithreaded fork.
the barrier of a_store isn't technically needed here, but a_store is
used anyway for internal consistency of the memory model.
|
|
this was leftover from when the actual SIGEV_THREAD timer logic was in
the signal handler. commit 5b74eed3b301e2227385f3bf26d3bb7c2d822cf8
replaced that with use of sigwaitinfo, with the actual signal left
blocked, so the no-op signal handler was no longer serving any
purpose.
the signal disposition reset to SIG_DFL is still needed, however, in
case we inherited SIG_IGN from a foreign-libc process.
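the resulting arrangement looks roughly like this sketch:
    #include <pthread.h>
    #include <signal.h>

    /* the timer signal is consumed synchronously with it left blocked, so no
     * handler is installed; only the disposition is reset in case SIG_IGN was
     * inherited */
    static void timer_signal_loop(int sig)
    {
        sigset_t set;
        siginfo_t si;
        sigemptyset(&set);
        sigaddset(&set, sig);
        pthread_sigmask(SIG_BLOCK, &set, 0);
        signal(sig, SIG_DFL);               /* undo a possibly inherited SIG_IGN */
        while (sigwaitinfo(&set, &si) >= 0) {
            /* ...run the SIGEV_THREAD notification function... */
        }
    }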
|
|
assert is not specified to flush open stdio streams, and doing so can
block indefinitely waiting for a lock already held or an output
operation to a file that can't accept more output until an
unsatisfiable condition is met.
|
|
commit 500c6886c654fd45e4926990fee2c61d816be197 broke this by fixing
the behavior of fread to conform to the C standard; getgrouplist was
assuming the old behavior, that a request to read 1 member of length 0
would return 1, not 0.
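under C-conforming fread, a size (or nmemb) of zero yields a return of zero
with nothing read, so callers that may legitimately request zero-length
members have to special-case it, e.g.:
    #include <stdio.h>

    /* illustrative helper: succeed trivially on a zero-length read instead
     * of misreading fread's mandatory 0 return as an error */
    static int read_item(FILE *f, void *buf, size_t len)
    {
        if (!len) return 1;
        return fread(buf, len, 1, f) == 1;
    }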
|
|
this change prevents the child created concurrently with abort from
seeing the SIGABRT disposition change from SIG_IGN to SIG_DFL (other
changes are not visible anyway) and prevents leaking the write end of
the child pipe to children created by fork in another thread, which
may block return of posix_spawn indefinitely if the forked child does
not exit or exec.
along with other changes, this suggests that __abort_lock should
perhaps eventually be renamed to reflect that it's becoming a broader
lock on related "process lifetime" state.
|
|
the existing abort locking logic in sigaction only accounted for
attempts to change the disposition, not attempts to observe the change
made by abort.
unfortunately the change is still observable in at least one other
place: inheritance of signal dispositions across exec and posix_spawn.
fixing these is a separate task and it's not even clear whether a
complete fix is possible.
|
|
the _Fork interface is defined for future issue of POSIX as the
outcome of Austin Group issue 62, which drops the AS-safety
requirement for fork, and provides an AS-safe replacement that does
not run the registered atfork handlers.
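a minimal usage sketch (declaring _Fork locally in case the headers predate
it):
    #include <unistd.h>

    pid_t _Fork(void);

    /* AS-safe fork that runs no atfork handlers; as with pre-issue-62 fork,
     * the child should restrict itself to AS-safe functions */
    static pid_t spawn_true(void)
    {
        pid_t pid = _Fork();
        if (!pid) {
            execl("/bin/true", "true", (char *)0);
            _exit(127);
        }
        return pid;
    }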
|
|
this is in preparation for implementing _Fork from POSIX-future,
factored as a separate commit to improve readability of history.
|
|
this makes the code slightly smaller and eliminates these functions
from relevance to possible future changes to multithreaded fork.
the barrier of a_store isn't technically needed here, but a_store is
used anyway for internal consistency of the memory model.
|
|
if the multithreaded parent forked while another thread was calling
sigaction for SIGABRT or calling abort, the child could inherit a lock
state in which future calls to abort will deadlock, or in which the
disposition for SIGABRT has already been reset to SIG_DFL. this is
nonconforming since abort is AS-safe and permitted to be called
concurrently with fork or in the MT-forked child.
|
|
the dummy definition of __abort_lock in sigaction.c was performing
exactly the same role that putting the lock in its own source file
could and should have been used to achieve.
while we're moving it, give it a proper declaration.
|
|
previously, if a file descriptor had aio operations pending in the
parent before fork, attempting to close it in the child would attempt
to cancel a thread belonging to the parent. this could deadlock, fail,
or crash the whole process if the cancellation signal handler was not
yet installed in the parent. in addition, further use of aio from the
child could malfunction or deadlock.
POSIX specifies that async io operations are not inherited by the
child on fork, so clear the entire aio fd map in the child, and take
the aio map lock (with signals blocked) across the fork so that the
lock is kept in a consistent state.
|
|
taking the deprecated/dropped vfork spec strictly, doing pretty much
anything but execve in the child is wrong and undefined. however,
these are commonly needed operations to setup the child state before
exec, and historical implementations tolerated them.
for single-threaded parents, these operations already worked as
expected in the vforked child. however, due to the need for __synccall
to synchronize id/resource limit changes among all threads, calling
these functions in the vforked child of a multithreaded parent caused
a misdirected broadcast signaling of all threads in the parent. these
signals could kill the parent entirely if the synccall signal handler
had never been installed in the parent, or could be ignored if it had,
or could signal/kill one or more utterly wrong processes if the parent
already terminated (due to vfork semantics, only possible via fatal
signal) and the parent tids were recycled. in any case, the expected
number of semaphore posts would never happen, so the child would
permanently hang (with all signals blocked) waiting for them.
to mitigate this, and also make the normal usage case work as
intended, treat the condition where the caller's actual tid does not
match the tid in its thread structure as single-threaded, and bypass
the entire synccall broadcast operation.
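the detection itself is just a tid comparison, illustrated here with
cached_tid standing in for the value musl keeps in the thread structure:
    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    /* in a vforked child, the kernel-reported tid no longer matches the tid
     * recorded in the (shared) thread structure, so the caller can be
     * treated as single-threaded and the broadcast skipped */
    static int running_as_recorded_thread(pid_t cached_tid)
    {
        return (pid_t)syscall(SYS_gettid) == cached_tid;
    }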
|
|
commit 0a05eace163cee9b08571d2ff9d90f5e82d9c228 implemented AT_EACCESS
for faccessat with a horrible hack, creating a child process to switch
uid/gid and perform the access probe without making potentially
irreversible changes to the caller's credentials. this was due to the
syscall lacking a flags argument.
linux 5.8 introduced a new syscall, SYS_faccessat2, fixing this
deficiency. use it if any flags are passed, and fall back to the old
strategy on ENOSYS. continue using the old syscall when there are no
flags.
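the dispatch is roughly the following sketch; SYS_faccessat2 requires linux
5.8+ headers, and the old strategy is left as a hypothetical helper:
    #define _GNU_SOURCE
    #include <errno.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int faccessat_old_strategy(int, const char *, int, int);   /* hypothetical */

    static int faccessat_sketch(int fd, const char *path, int amode, int flag)
    {
        if (!flag)
            return syscall(SYS_faccessat, fd, path, amode);

        long ret = syscall(SYS_faccessat2, fd, path, amode, flag);
        if (ret >= 0 || errno != ENOSYS)
            return ret;

        /* kernel predates faccessat2: fall back to the old strategy */
        return faccessat_old_strategy(fd, path, amode, flag);
    }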
|
|
|
|
this code is only needed for pre-2.6 kernels, which are not actually
supported anyway, and was never tested. the fallback path using
SYS_modify_ldt failed to clear the upper bits of %eax (all ones due to
SYS_set_thread_area's return value being an error) before modifying
%al to attempt a new syscall.
|
|
prior to commit e68c51ac46a9f273927aef8dcebc89912ab19ece, h_errno was
actually an external data object not a macro. bring back the symbol,
and use it as the storage for the main thread's h_errno.
technically this still doesn't provide full compatibility if the
application was multithreaded, but at the time there were no res_*
functions (and they did not set h_errno anyway), so any use of h_errno
would have been via thread-unsafe functions. thus a solution that just
fixes single-threaded applications seems acceptable.
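the arrangement is roughly this sketch; the real per-thread storage lives
in the musl thread structure, and the main-thread check is a hypothetical
helper:
    #include <netdb.h>

    #undef h_errno
    int h_errno;                            /* the restored ABI data symbol */

    static _Thread_local int h_errno_tls;   /* stand-in for per-thread storage */
    int is_main_thread(void);               /* hypothetical */

    int *__h_errno_location(void)
    {
        if (is_main_thread()) return &h_errno;
        return &h_errno_tls;
    }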
|
|
now that struct winsize is available via sys/ioctl.h once again,
including termios.h is not needed.
|
|
dtv_copy, canary2, and canary_at_end existed solely to match multiple
ABI and asm-accessed layouts simultaneously. now that pthread_arch.h
can be included before struct __pthread is defined, the struct layout
can depend on macros defined by pthread_arch.h.
|
|
the adjustment made is entirely a function of TLS_ABOVE_TP and
TP_OFFSET. aside from avoiding repetition of the TP_OFFSET value and
arithmetic, this change makes pthread_arch.h independent of the
definition of struct __pthread from pthread_impl.h. this in turn will
allow inclusion of pthread_arch.h to be moved to the top of
pthread_impl.h so that it can influence the definition of the
structure.
previously, arch files were very inconsistent about the type used for
the thread pointer. this change unifies the new __get_tp interface to
always use uintptr_t, which is the most correct when performing
arithmetic that may involve addresses outside the actual pointed-to
object (due to TP_OFFSET).
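the unified definition is then a one-liner keyed off those two macros,
roughly (names assumed from the surrounding text, not copied from the
tree):
    #include <stdint.h>

    #ifdef TLS_ABOVE_TP
    #define TP_ADJ(p) ((uintptr_t)(p) + sizeof(struct __pthread) + TP_OFFSET)
    #else
    #define TP_ADJ(p) ((uintptr_t)(p))
    #endif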
|
|
the only part of TP_ADJ that was not uniquely determined by
TLS_ABOVE_TP was the 0x7000 adjustment used mainly on mips and powerpc
variants.
|
|
while it's not clearly documented anywhere, this is the historical
behavior which some applications expect. applications which need to
see the response packet in these cases, for example to distinguish
between nonexistence in a secure vs insecure zone, must already use
res_mkquery with res_send in order to be portable, since most if not
all other implementations of res_query don't provide it.
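portable callers that need the packet do roughly the following (numeric
constants keep it header-agnostic: opcode 0 is QUERY, class 1 is IN, type 1
is A; on glibc link with -lresolv):
    #include <resolv.h>

    /* sketch: build and send the query directly so the response packet is
     * available even when the result is NXDOMAIN or NODATA */
    static int query_raw(const char *name, unsigned char *ans, int anslen)
    {
        unsigned char q[280];
        int qlen = res_mkquery(0, name, 1, 1, 0, 0, 0, q, sizeof q);
        if (qlen < 0) return -1;
        return res_send(q, qlen, ans, anslen);
    }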
|
|
the framework to do this always existed but it was deemed unnecessary
because the only [ex-]standard functions using h_errno were not
thread-safe anyway. however, some of the nonstandard res_* functions
are also supposed to set h_errno to indicate the cause of error, and
were unable to do so because it was not thread-safe. this change is a
prerequisite for fixing them.
|
|
these have been adopted for future issue of POSIX as the outcome of
Austin Group issue 1151, and are simply functions performing the roles
of the historical ioctls. since struct winsize is being standardized
along with them, its definition is moved to the appropriate header.
there is some chance this will break source files that expect struct
winsize to be defined by sys/ioctl.h without including termios.h. if
this happens, further changes will be needed to have sys/ioctl.h
expose it too.
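the functions in question are (assumed here to be) tcgetwinsize and
tcsetwinsize; usage is e.g.:
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    static int print_terminal_size(void)
    {
        struct winsize ws;
        if (tcgetwinsize(STDOUT_FILENO, &ws) < 0) return -1;
        printf("%hu rows x %hu cols\n", ws.ws_row, ws.ws_col);
        return 0;
    }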
|
|
all path elements but the last had the final byte truncated.
|
|
this is a prerequisite for addition of other interfaces that use
kernel tids, including futex and SIGEV_THREAD_ID.
there is some ambiguity as to whether the semantic return type should
be int or pid_t. either way, futex API imposes a contract that the
values fit in int (excluding some upper reserved bits). glibc used
pid_t, so in the interest of not having gratuitous mismatch (the
underlying types are the same anyway), pid_t is used here as well.
while conceptually this is a syscall, the copy stored in the thread
structure is always valid in all contexts where it's valid to call
libc functions, so it's used to avoid the syscall.
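usage is simply (assuming a libc new enough to expose gettid):
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* the kernel tid of the calling thread; under the scheme described
         * above it is served from the thread structure, not a syscall */
        printf("tid=%d\n", (int)gettid());
        return 0;
    }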
|
|
longjmp should set the return value of setjmp, but 64-bit
registers were used for the 0 check while the type is int.
use the code that gcc generates for return val ? val : 1;
|
|
Use a branchless sequence that is one byte shorter on 64-bit, same size
on 32-bit. Thanks to Pete Cawley for suggesting this variant.
|
|
|