path: root/src/thread/pthread_barrier_wait.c
2012-08-17  fix extremely rare but dangerous race condition in robust mutexes  (Rich Felker, 1 file, -19/+4)
if new shared mappings of files/devices/shared memory can be made between the time a robust mutex is unlocked and its subsequent removal from the pending slot in the robust list header, the kernel can inadvertently corrupt data in the newly-mapped pages when the process terminates. i am fixing the bug by using the same global vm lock mechanism that was used to fix the race condition with unmapping barriers after pthread_barrier_wait returns.
2011-09-28  fix excessive/insufficient wakes in __vm_unlock  (Rich Felker, 1 file, -3/+3)
there is no need to send a wake when the lock count does not hit zero, but when it does, all waiters must be woken (since all with the same sign are eligible to obtain the lock).
2011-09-28  improve pshared barriers  (Rich Felker, 1 file, -11/+13)
eliminate the sequence number field and instead use the counter as the futex. because of the way the lock is held, sequence numbers are completely useless, and this frees up a field in the barrier structure to be used as a waiter count for the count futex, which lets us avoid some syscalls in the best case.

as of now, self-synchronized destruction and unmapping should be fully safe. before any thread can return from the barrier, all threads in the barrier have obtained the vm lock, and each holds a shared lock on the barrier. the barrier memory is not inspected after the shared lock count reaches 0, nor after the vm lock is released.
2011-09-28  next step making barrier self-sync'd destruction safe  (Rich Felker, 1 file, -4/+12)
i think this works, but it can be simplified. (next step)
2011-09-27  correctly handle the degenerate barrier in the pshared case  (Rich Felker, 1 file, -1/+1)
2011-09-27  fix pshared barrier wrong return value  (Rich Felker, 1 file, -1/+1)
i set the return value but then never used it... oops!
2011-09-27  process-shared barrier support, based on discussion with bdonlan  (Rich Felker, 1 file, -7/+67)
this implementation is rather heavy-weight, but it's the first solution i've found that's actually correct. all waiters actually wait twice at the barrier so that they can synchronize exit, and they hold a "vm lock" that prevents changes to virtual memory mappings (and blocks pthread_barrier_destroy) until all waiters are finished inspecting the barrier. thus, it is safe for any thread to destroy and/or unmap the barrier's memory as soon as pthread_barrier_wait returns, without further synchronization.
2011-05-06  remove debug code that was missed in barrier commit  (Rich Felker, 1 file, -1/+0)
2011-05-06  completely new barrier implementation, addressing major correctness issues  (Rich Felker, 1 file, -16/+44)
the previous implementation had at least 2 problems:

1. the case where additional threads reached the barrier before the first wave was finished leaving the barrier was untested and seemed not to be working.

2. threads leaving the barrier continued to access memory within the barrier object after other threads had successfully returned from pthread_barrier_wait. this could lead to memory corruption or crashes if the barrier object had automatic storage in one of the waiting threads and went out of scope before all threads finished returning, or if one thread unmapped the memory in which the barrier object lived.

the new implementation avoids both problems by making the barrier state essentially local to the first thread which enters the barrier wait, and forcing that thread to be the last to return.
2011-02-17  reorganize pthread data structures and move the definitions to alltypes.h  (Rich Felker, 1 file, -11/+11)
this allows sys/types.h to provide the pthread types, as required by POSIX. this design also facilitates forcing ABI-compatible sizes in the arch-specific alltypes.h, while sparing developers who change the internals of the pthread types from having to poke around in arch-specific headers they may not be able to test.
2011-02-12  initial check-in, version 0.5.0  (tag: v0.5.0)  (Rich Felker, 1 file, -0/+31)