| author | Rich Felker <dalias@aerifal.cx> | 2022-10-06 20:53:01 -0400 |
|---|---|---|
| committer | Rich Felker <dalias@aerifal.cx> | 2022-10-19 14:01:32 -0400 |
| commit | aebd6a36449e91c06763a40121d558b6cea90d50 (patch) | |
| tree | a597d9d05d15d0767e7063d287fe3f9bd80f33fe /include/semaphore.h | |
| parent | d64148a8743ad9ed0594091d2ff141b1e9334d4b (diff) | |
fix potential deadlock between multithreaded fork and aio
as reported by Alexey Izbyshev, there is a lock order inversion
deadlock between the malloc lock and aio maplock at MT-fork time:
_Fork attempts to take the aio maplock while fork already has the
malloc lock, but a concurrent aio operation holding the maplock may
attempt to allocate memory.
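the inversion can be reproduced in isolation with ordinary pthread primitives. the sketch below is illustrative only, not musl source: a plain mutex stands in for the malloc lock and an rwlock stands in for the aio maplock, and the names are hypothetical.

```c
/* illustrative sketch only, not musl code: lock-order inversion between
 * a mutex (standing in for the malloc lock) and an rwlock (standing in
 * for the aio maplock) */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t malloc_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_rwlock_t maplock = PTHREAD_RWLOCK_INITIALIZER;

/* models the fork path (pre-patch): takes the malloc lock, then _Fork
 * tries to write-lock the aio maplock */
static void *forker(void *arg)
{
	pthread_mutex_lock(&malloc_lock);
	sleep(1);                         /* let the aio thread get its read lock */
	pthread_rwlock_wrlock(&maplock);  /* blocks: a reader still holds the lock */
	puts("fork path acquired both locks");
	pthread_rwlock_unlock(&maplock);
	pthread_mutex_unlock(&malloc_lock);
	return 0;
}

/* models a concurrent aio operation: holds a read lock on the maplock
 * and then needs to allocate, i.e. needs the malloc lock */
static void *aio_op(void *arg)
{
	pthread_rwlock_rdlock(&maplock);
	sleep(1);                         /* let the fork path get the malloc lock */
	pthread_mutex_lock(&malloc_lock); /* blocks: the fork path holds it */
	puts("aio op acquired both locks");
	pthread_mutex_unlock(&malloc_lock);
	pthread_rwlock_unlock(&maplock);
	return 0;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, 0, forker, 0);
	pthread_create(&b, 0, aio_op, 0);
	pthread_join(a, 0);   /* once both sleeps have elapsed, neither join returns */
	pthread_join(b, 0);
	return 0;
}
```

with both sleeps taken, each thread ends up waiting on the lock the other holds, which is the MT-fork scenario described above.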
move the __aio_atfork calls in the parent from _Fork to fork, and
reorder the aio maplock before most of the other locks, since nothing
else depends on aio(*). this leaves the possibility that the child will
not be able to obtain the read lock, if _Fork is used directly and
happens concurrently with an aio operation. however, in that case the
child context is an async-signal context that cannot call any further
aio functions, so all that's needed is to ensure that close does not
attempt any aio cancellation. this can be achieved just by nulling out
the map pointer.
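a minimal sketch of that child-side handling, using hypothetical names (aio_atfork_child, map, maplock) rather than the actual musl internals:

```c
/* illustrative sketch only; names and types are hypothetical stand-ins
 * for the aio internals, not the actual musl sources */
#include <pthread.h>
#include <stddef.h>

static pthread_rwlock_t maplock = PTHREAD_RWLOCK_INITIALIZER;
static void *map;   /* stand-in for the aio queue map */

/* child side of the atfork handling described above */
static void aio_atfork_child(void)
{
	if (pthread_rwlock_tryrdlock(&maplock)) {
		/* _Fork was used directly, racing an aio operation in the
		 * parent: the read lock cannot be obtained, but the child is
		 * in an async-signal context that cannot start new aio, so
		 * dropping the map suffices; close() then finds no queues to
		 * cancel against. */
		map = NULL;
		return;
	}
	/* normal fork path: the lock was obtainable, so the map is in a
	 * consistent state; keep it and release the read reference. */
	pthread_rwlock_unlock(&maplock);
}

int main(void)
{
	map = &maplock;       /* pretend some aio state exists */
	aio_atfork_child();   /* lock uncontended here, so the map is kept */
	return 0;
}
```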
(*) even if other functions call close, they will only need a read
lock, not a write lock, and read locks being recursive ensures they
can obtain it. moreover, the number of read references held is bounded
by something like twice the number of live threads, meaning that the
read lock count cannot saturate.