Commit message | Author | Age | Files | Lines

page fault panic in kernel under heavy swapping.
Notes:
svn path=/head/; revision=143149
and always has been, but the system call itself returns
errno in a register so the problem is really a function of
libc, not the system call.
Discussed with: Matthew Dillon <dillon@apollo.backplane.com>
Notes:
svn path=/head/; revision=140421
e.g., CLOCK_REALTIME or CLOCK_MONOTONIC.
Merge umtx_wait and umtx_timedwait into a single function.
Notes:
svn path=/head/; revision=140245
Notes:
svn path=/head/; revision=140110
errno can potentially be tampered with by a nested signal handler.
Now all error codes are returned as negative values; positive values
are reserved for future expansion.
Notes:
svn path=/head/; revision=140102
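The error-return convention this commit describes can be sketched in plain C. The function below is a hypothetical stand-in for the real syscall, not the kernel API: errors come back as negative values directly in the return register, so the global errno, which a nested signal handler could clobber, is never involved.

```c
#include <errno.h>
#include <stddef.h>

/*
 * Hypothetical stand-in for the real syscall: all errors are returned
 * as negative values, so nothing ever touches the global errno, which
 * a nested signal handler could otherwise clobber. Zero means success;
 * positive values stay reserved for future expansion.
 */
static int
sketch_umtx_op(volatile long *word, long expected)
{
    if (word == NULL)
        return (-EINVAL);       /* error: negated errno code */
    if (*word != expected)
        return (-EAGAIN);       /* error: negated errno code */
    return (0);                 /* success */
}
```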
Notes:
svn path=/head/; revision=139899
Notes:
svn path=/head/; revision=139804
APIs expect ETIMEDOUT, not EAGAIN; this simplifies userland code a bit.
Notes:
svn path=/head/; revision=139751
more general than the previous one. It also lets me implement
cancellation points in the thread library. In theory, umtx_lock and
umtx_unlock can be implemented using umtx_wait and umtx_wake, and all
atomic operations can be done in userland without the kernel's
casuptr() function.
Notes:
svn path=/head/; revision=139427
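The layering claim in this entry can be sketched with standard C11 atomics. All names here are illustrative, not the kernel API; the kernel sleep and wake calls are elided as comments, since the point is that the lock word itself is managed entirely by userland compare-and-swap.

```c
#include <errno.h>
#include <stdatomic.h>

#define SKETCH_UNOWNED 0L

/* Sketch of the layering described above (names are illustrative): the
 * lock word is managed by userland compare-and-swap; the kernel would
 * only be entered via umtx_wait() to sleep on contention and
 * umtx_wake() to wake a waiter. */
static int
sketch_umtx_lock(_Atomic long *umtx, long id)
{
    long old = SKETCH_UNOWNED;
    if (atomic_compare_exchange_strong(umtx, &old, id))
        return (0);             /* acquired without entering the kernel */
    /* contended: a real version would umtx_wait(umtx, old) and retry */
    return (-EBUSY);
}

static int
sketch_umtx_unlock(_Atomic long *umtx, long id)
{
    long old = id;
    if (!atomic_compare_exchange_strong(umtx, &old, SKETCH_UNOWNED))
        return (-EPERM);        /* caller does not own the lock */
    /* a real version would umtx_wake() one blocked waiter here */
    return (0);
}
```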
timespec pointer; every parameter will be interpreted according to its opcode.
Notes:
svn path=/head/; revision=139292
2. add const qualifier to umtx_timedlock and umtx_timedwait.
3. add missing brackets in umtx do_unlock_and_wait.
Notes:
svn path=/head/; revision=139291
Let umtx_lock return EINTR where it used to return ERESTART; this gives
userland a chance to back off from the mtx lock code when needed.
Notes:
svn path=/head/; revision=139258
on SMP can expose the bug.
2. Let umtx_wake return the number of threads that have been woken.
Notes:
svn path=/head/; revision=139257
2. Eliminate a possible lock leak in timed wait loop.
Notes:
svn path=/head/; revision=139014
call mmap() to create a shared space and then initialize a umtx on it;
after that, threads in different processes can use the umtx just as
threads in the same process do.
2. introduce a new syscall, _umtx_op, to support timed lock and condition
variable semantics. The original umtx_lock and umtx_unlock inline
functions are now reimplemented on top of _umtx_op, and _umtx_op can
use an arbitrary id, not just a thread id.
Notes:
svn path=/head/; revision=139013
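The process-shared setup in item 1 can be sketched with standard POSIX calls (the helper name is hypothetical): since a umtx is just a word of memory, placing one in a MAP_SHARED mapping makes it usable by every process that maps the region.

```c
#define _DEFAULT_SOURCE         /* for MAP_ANONYMOUS on glibc */
#include <stddef.h>
#include <sys/mman.h>

/* Create a lock word in memory shared across fork()ed processes; the
 * real code would initialize a umtx here and let each cooperating
 * process map and use it. */
static long *
make_shared_umtx(void)
{
    long *p = mmap(NULL, sizeof(long), PROT_READ | PROT_WRITE,
        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return (NULL);
    *p = 0;                     /* start unowned */
    return (p);
}
```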
Notes:
svn path=/head/; revision=138225
lock collision.
2. Fix two race conditions. One is between _umtx_unlock and signal
delivery: after a thread is marked TDF_UMTXWAKEUP by _umtx_unlock, it
is possible that a delivered signal will cause msleep to return EINTR,
and the thread breaks out of the loop, so umtx ownership is never
transferred to it. The other is in _umtx_unlock itself: when the
function sets the umtx to the UMTX_UNOWNED state, a new thread can
come in and lock the umtx, so the function's attempt to set the
contested bit will fail. Although the function will wake a blocked
thread, if that thread breaks out of the loop on a signal, no
contested bit will ever be set.
Notes:
svn path=/head/; revision=138224
need only obtain the process lock.
Notes:
svn path=/head/; revision=132039
pointer to the corresponding struct thread to the thread ID (lwpid_t)
assigned to that thread. The primary reason for this change is that
libthr now internally uses the same ID as the debugger and the kernel
when referencing a kernel thread. This allows us to implement the
support for debugging without additional translations and/or mappings.
To preserve the ABI, the 1:1 threading syscalls, including the umtx
locking API have not been changed to work on a lwpid_t. Instead the
1:1 threading syscalls operate on long and the umtx locking API has
not been changed except for the contested bit. Previously this was
the least significant bit. Now it's the most significant bit. Since
the contested bit should not be tested by userland, this change is
not expected to be visible. Just to be sure, UMTX_CONTESTED has been
removed from <sys/umtx.h>.
Reviewed by: mtm@
ABI preservation tested on: i386, ia64
Notes:
svn path=/head/; revision=131431
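The bit move described above can be illustrated in plain C. The macro names below are assumptions (the real UMTX_CONTESTED was removed from <sys/umtx.h> precisely so userland would not test it): the contested flag occupies the most significant bit of the lock word, leaving the remaining bits for the owner's ID.

```c
/* Illustrative encoding only: the contested flag in the MSB of the
 * lock word, the owner ID in the remaining bits. These names are not
 * the kernel's; userland is not supposed to test this bit at all. */
#define SKETCH_CONTESTED  (1UL << (sizeof(unsigned long) * 8 - 1))
#define SKETCH_OWNER(w)   ((w) & ~SKETCH_CONTESTED)

static int
sketch_is_contested(unsigned long word)
{
    return ((word & SKETCH_CONTESTED) != 0);
}
```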
Notes:
svn path=/head/; revision=127483
_umtx_unlock() instead of firing a KASSERT.
Notes:
svn path=/head/; revision=119836
am not sure about that. The lack of -Werror and the inline noise hid
this for a while.
Notes:
svn path=/head/; revision=117938
comes across it, it will turn into a core dump in userland instead of
a kernel panic. I had also inverted the sense of the test, so
Double pointy hat to: mtm
Notes:
svn path=/head/; revision=117778
Notes:
svn path=/head/; revision=117745
bit or not from lock to unlock time.
Suggested by: jhb
Notes:
svn path=/head/; revision=117743
1. There was a race condition between a thread unlocking
a umtx and the thread contesting it. If the unlocking
thread won the race it may try to wakeup a thread that
was not yet in msleep(). The contesting thread would then
go to sleep to await a wakeup that would never come. It's
not possible to close the race by using a lock because
calls to casuptr() may have to fault a page in from swap.
Instead, the race was closed by introducing a flag that
the unlocking thread will set when waking up a thread.
The contesting thread will check for this flag before
going to sleep. For now the flag is kept in td_flags,
but it may be better to use some other member or create
a new one because of the possible performance/contention
issues of having to own sched_lock. Thanks to jhb for
pointing me in the right direction on this one.
2. Once a umtx was contested all future locks and unlocks
were happening in the kernel, regardless of whether it
was contested or not. To prevent this from happening,
when a thread locks a umtx it checks the queue for that
umtx and unsets the contested bit if there are no other
threads waiting on it. Again, this is slightly more
complicated than it needs to be because we can't hold
a lock across casuptr(). So, the thread has to check
the queue again after unsetting the bit, and reset the
contested bit if it finds that another thread has put
itself on the queue in the meantime.
3. Remove the if... block for unlocking an uncontested
umtx, and replace it with a KASSERT. The _only_ time
a thread should be unlocking a umtx in the kernel is
if it is contested.
Notes:
svn path=/head/; revision=117685
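Item 2's check, clear, recheck pattern can be sketched with C11 atomics. All names and the bit position here are illustrative; the real code operates on the userland word via casuptr() and consults the actual umtx sleep queue, for which a plain counter stands in.

```c
#include <stdatomic.h>

#define SKETCH_CONTESTED (1L << 30)     /* illustrative bit position */

/* Sketch of item 2 above: because no lock can be held across the
 * user-space CAS, the contested bit is cleared optimistically and the
 * wait queue is checked again afterwards; if a waiter queued itself in
 * the window, the bit is restored. queue_len stands in for the real
 * umtx sleep queue. */
static void
sketch_maybe_unset_contested(_Atomic long *umtx, const int *queue_len)
{
    if (*queue_len == 0) {
        atomic_fetch_and(umtx, ~SKETCH_CONTESTED);   /* optimistic clear */
        if (*queue_len != 0)                         /* recheck the queue */
            atomic_fetch_or(umtx, SKETCH_CONTESTED); /* waiter raced in */
    }
}
```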
notice another typo in the same line. This typo makes libthr unusable,
but its effects were counter-balanced by the extra semicolon, which
made libthr remarkably usable for the past several months.
Notes:
svn path=/head/; revision=117244
Notes:
svn path=/head/; revision=117219
Notes:
svn path=/head/; revision=116182
- Use a hash of umtx queues to queue blocked threads. We hash on pid and the
virtual address of the umtx structure. This eliminates cases where we
previously held a lock across a casuptr call.
Reviewed by: jhb (quickly)
Notes:
svn path=/head/; revision=115765
protecting the umtx queues. We can't use the proc lock because we need
to hold the lock across calls to casuptr, which can fault.
Approved by: re
Notes:
svn path=/head/; revision=115310
and change the umtx code to expect this.
Reviewed by: jeff
Notes:
svn path=/head/; revision=112967
- umtx_lock() is defined as an inline in umtx.h. It tries to do an
  uncontested acquire of a lock, falling back to the _umtx_lock()
  system call if that fails.
- umtx_unlock() is also an inline which falls back to _umtx_unlock() if the
uncontested unlock fails.
- Locks are keyed off of the thr_id_t of the currently running thread which
is currently just the pointer to the 'struct thread' in kernel.
- _umtx_lock() uses the proc pointer to synchronize access to blocked thread
queues which are stored in the first blocked thread.
Notes:
svn path=/head/; revision=112904
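The fast-path/slow-path split in the bullets above can be sketched with C11 atomics. Names are illustrative; a hypothetical counter stands in for the actual _umtx_lock() system call so the fallback is observable.

```c
#include <stdatomic.h>

static int slow_path_calls;     /* stands in for entering the kernel */

/* Stand-in for the _umtx_lock() syscall taken when the inline CAS
 * fails; the real one queues the thread on the first blocked thread's
 * queue and sleeps until the lock is handed over. */
static int
sketch_umtx_lock_sys(_Atomic long *umtx, long tid)
{
    (void)umtx;
    (void)tid;
    slow_path_calls++;
    return (0);
}

/* Sketch of the inline described above: try an uncontested acquire
 * keyed on the thread ID, and fall back to the syscall on failure. */
static inline int
sketch_umtx_lock(_Atomic long *umtx, long tid)
{
    long old = 0;               /* 0 == unowned */
    if (atomic_compare_exchange_strong(umtx, &old, tid))
        return (0);             /* fast path: no kernel entry */
    return (sketch_umtx_lock_sys(umtx, tid));
}
```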