path: root/sys/kern/kern_umtx.c
* Allocate umtx_q from the heap instead of the stack; this avoids a page fault panic in the kernel under heavy swapping. (David Xu, 2005-03-05, 1 file, -34/+46)
  Notes: svn path=/head/; revision=143149
* Revert my previous errno hack; that is certainly an issue, and always has been, but the system call itself returns errno in a register, so the problem is really a function of libc, not the system call. (David Xu, 2005-01-18, 1 file, -2/+1)
  Discussed with: Matthew Dillion <dillon@apollo.backplane.com>
  Notes: svn path=/head/; revision=140421
* Make the umtx timeout relative so userland can select a different clock type, e.g., CLOCK_REALTIME or CLOCK_MONOTONIC. Merge umtx_wait and umtx_timedwait into a single function. (David Xu, 2005-01-14, 1 file, -46/+51)
  Notes: svn path=/head/; revision=140245
* Comment out a debugging printf which doesn't compile on amd64. (Poul-Henning Kamp, 2005-01-12, 1 file, -0/+2)
  Notes: svn path=/head/; revision=140110
* Let _umtx_op return the error code directly rather than via errno, because errno can potentially be tampered with by a nested signal handler. All error codes are now returned as negative values; positive values are reserved for future expansion. (David Xu, 2005-01-12, 1 file, -12/+23)
  Notes: svn path=/head/; revision=140102
* Break out of the loop earlier if it is not a timeout. (David Xu, 2005-01-08, 1 file, -1/+1)
  Notes: svn path=/head/; revision=139899
* /* -> /*- for copyright notices; minor format tweaks as necessary. (Warner Losh, 2005-01-06, 1 file, -1/+1)
  Notes: svn path=/head/; revision=139804
* Return ETIMEDOUT when a thread times out, since POSIX thread APIs expect ETIMEDOUT, not EAGAIN; this simplifies userland code a bit. (David Xu, 2005-01-06, 1 file, -5/+7)
  Notes: svn path=/head/; revision=139751
* Make umtx_wait and umtx_wake work more like Linux futexes; this is more general than before. It also allows implementing cancellation points in the thread library. In theory, umtx_lock and umtx_unlock can now be implemented using umtx_wait and umtx_wake, with all atomic operations done in userland without the kernel's casuptr() function. (David Xu, 2004-12-30, 1 file, -41/+9)
  Notes: svn path=/head/; revision=139427
* Make _umtx_op() a more general interface: the final parameter need not be a timespec pointer; every parameter is interpreted according to its opcode. (David Xu, 2004-12-25, 1 file, -4/+4)
  Notes: svn path=/head/; revision=139292
* 1. Introduce umtx_owner to get the owner of a umtx.
  2. Add a const qualifier to umtx_timedlock and umtx_timedwait.
  3. Add missing brackets in umtx do_unlock_and_wait.
  (David Xu, 2004-12-25, 1 file, -3/+1)
  Notes: svn path=/head/; revision=139291
* Add umtxq_lock/unlock around umtx_signal, fix the debug kernel build, and let umtx_lock return EINTR where it returned ERESTART; this gives userland a chance to back off in the mutex lock code when needed. (David Xu, 2004-12-24, 1 file, -5/+9)
  Notes: svn path=/head/; revision=139258
* 1. Fix a race condition between umtx lock and unlock; heavy testing on SMP can expose the bug.
  2. Let umtx_wake return the number of threads that have been woken.
  (David Xu, 2004-12-24, 1 file, -133/+104)
  Notes: svn path=/head/; revision=139257
* 1. msleep returns EWOULDBLOCK, not ETIMEDOUT; use EWOULDBLOCK instead.
  2. Eliminate a possible lock leak in the timed wait loop.
  (David Xu, 2004-12-18, 1 file, -8/+6)
  Notes: svn path=/head/; revision=139014
* 1. Make umtx shareable between processes: two or more processes call mmap() to create a shared space and then initialize a umtx in it; after that, threads in the different processes can use the umtx the same way threads in a single process do.
  2. Introduce a new syscall, _umtx_op, to support timed lock and condition variable semantics. The original umtx_lock and umtx_unlock inline functions are now reimplemented using _umtx_op, and _umtx_op can use an arbitrary id, not just a thread id.
  (David Xu, 2004-12-18, 1 file, -170/+544)
  Notes: svn path=/head/; revision=139013
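  The cross-process setup in item 1 can be sketched with plain POSIX primitives. This is not the umtx API itself: the mmap()/fork() usage is as the commit describes, but the atomic counter standing in for the umtx word is an illustrative assumption.

  ```c
  #include <assert.h>
  #include <stdatomic.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void) {
      /* Create a MAP_SHARED region, as the commit describes for a
       * cross-process umtx.  A plain atomic counter stands in for the
       * umtx word itself. */
      atomic_long *word = mmap(NULL, sizeof *word, PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
      assert((void *)word != MAP_FAILED);
      atomic_init(word, 0);

      pid_t pid = fork();
      assert(pid >= 0);
      for (int i = 0; i < 1000; i++)
          atomic_fetch_add(word, 1);   /* both processes see the same word */
      if (pid == 0)
          _exit(0);
      waitpid(pid, NULL, 0);
      printf("shared word = %ld\n", atomic_load(word));
      return 0;
  }
  ```

  Both processes increment through the same mapping, so the parent observes 2000 after the child exits; the same sharing property is what lets a umtx placed in such a region synchronize threads of different processes.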
* Forgot to inline umtxq_unlock. (David Xu, 2004-11-30, 1 file, -1/+1)
  Notes: svn path=/head/; revision=138225
* 1. Use a per-chain mutex instead of a global mutex to reduce lock contention.
  2. Fix two race conditions. One is between _umtx_unlock and signal delivery: after a thread is marked TDF_UMTXWAKEUP by _umtx_unlock, a signal delivered to that thread can make msleep return EINTR and the thread break out of its loop, so umtx ownership is never transferred to it. The other is in _umtx_unlock itself: after the function sets the umtx to the UMTX_UNOWNED state, a new thread can come in and lock the umtx, so the function's attempt to set the contested bit fails. Although the function still wakes a blocked thread, if that thread breaks out of its loop due to a signal, the contested bit is never set.
  (David Xu, 2004-11-30, 1 file, -115/+212)
  Notes: svn path=/head/; revision=138224
* Writers must hold both sched_lock and the process lock; therefore, readers need only obtain the process lock. (Mike Makonnen, 2004-07-12, 1 file, -5/+3)
  Notes: svn path=/head/; revision=132039
* Change the thread ID (thr_id_t) used for 1:1 threading from being a pointer to the corresponding struct thread to the thread ID (lwpid_t) assigned to that thread. The primary reason for this change is that libthr now internally uses the same ID as the debugger and the kernel when referring to a kernel thread. This allows us to implement support for debugging without additional translations and/or mappings. To preserve the ABI, the 1:1 threading syscalls, including the umtx locking API, have not been changed to work on an lwpid_t. Instead, the 1:1 threading syscalls operate on long, and the umtx locking API has not been changed except for the contested bit: previously this was the least significant bit; now it's the most significant bit. Since the contested bit should not be tested by userland, this change is not expected to be visible. Just to be sure, UMTX_CONTESTED has been removed from <sys/umtx.h>. (Marcel Moolenaar, 2004-07-02, 1 file, -3/+5)
  Reviewed by: mtm@
  ABI preservation tested on: i386, ia64
  Notes: svn path=/head/; revision=131431
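  The contested-bit relocation can be illustrated with ordinary bit arithmetic. `UMTX_CONTESTED` below is a local stand-in (this commit removed the real macro from <sys/umtx.h>), and the owner value is made up.

  ```c
  #include <assert.h>
  #include <limits.h>
  #include <stdio.h>

  /* After this commit the contested bit is the most significant bit of
   * the long-sized umtx word, leaving the low bits for the lwpid_t-style
   * owner id. */
  #define UMTX_CONTESTED  LONG_MIN   /* MSB of a long, illustrative only */

  int main(void) {
      long owner = 100123;                       /* hypothetical thread id */
      long word  = owner | UMTX_CONTESTED;       /* locked and contested */

      assert((word & UMTX_CONTESTED) != 0);      /* contested bit is set */
      assert((word & ~UMTX_CONTESTED) == owner); /* owner id survives intact */
      printf("owner=%ld contested=%d\n", word & ~UMTX_CONTESTED,
             (word & UMTX_CONTESTED) != 0);
      return 0;
  }
  ```

  Because the flag occupies a bit no valid thread id uses, masking it off always recovers the owner, which is why moving it from the LSB to the MSB is invisible to userland that never tests the bit.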
* Use the proc lock to sleep on a libthr umtx. (Mike Makonnen, 2004-03-27, 1 file, -2/+6)
  Notes: svn path=/head/; revision=127483
* Return EINVAL if the contested bit is not set on the umtx passed to _umtx_unlock(), instead of firing a KASSERT. (Tim J. Robbins, 2003-09-07, 1 file, -1/+2)
  Notes: svn path=/head/; revision=119836
* Initialize 'blocked' to NULL. I think this was a real problem, but I am not sure about that. The lack of -Werror and the inline noise hid this for a while. (Peter Wemm, 2003-07-23, 1 file, -0/+1)
  Notes: svn path=/head/; revision=117938
* Turn a KASSERT back into an EINVAL return value, so the next time someone comes across it, it will turn into a core dump in userland instead of a kernel panic. I had also inverted the sense of the test. (Mike Makonnen, 2003-07-19, 1 file, -2/+4)
  Double pointy hat to: mtm
  Notes: svn path=/head/; revision=117778
* Remove a lock held across casuptr() that snuck in last commit. (Mike Makonnen, 2003-07-18, 1 file, -2/+5)
  Notes: svn path=/head/; revision=117745
* Move the decision on whether to unset the contested bit or not from lock time to unlock time. (Mike Makonnen, 2003-07-18, 1 file, -48/+40)
  Suggested by: jhb
  Notes: svn path=/head/; revision=117743
* Fix umtx locking, for libthr, in the kernel. (Mike Makonnen, 2003-07-17, 1 file, -24/+47)
  1. There was a race condition between a thread unlocking a umtx and the thread contesting it. If the unlocking thread won the race, it might try to wake up a thread that was not yet in msleep(). The contesting thread would then go to sleep to await a wakeup that would never come. It's not possible to close the race by using a lock, because calls to casuptr() may have to fault a page in from swap. Instead, the race was closed by introducing a flag that the unlocking thread sets when waking up a thread. The contesting thread checks for this flag before going to sleep. For now the flag is kept in td_flags, but it may be better to use some other member or create a new one, because of the possible performance/contention issues of having to own sched_lock. Thanks to jhb for pointing me in the right direction on this one.
  2. Once a umtx was contested, all future locks and unlocks were happening in the kernel, regardless of whether it was contested or not. To prevent this, when a thread locks a umtx it checks the queue for that umtx and unsets the contested bit if there are no other threads waiting on it. Again, this is slightly more complicated than it needs to be because we can't hold a lock across casuptr(). So, the thread has to check the queue again after unsetting the bit, and reset the contested bit if it finds that another thread has put itself on the queue in the meantime.
  3. Remove the if... block for unlocking an uncontested umtx, and replace it with a KASSERT. The _only_ time a thread should be unlocking a umtx in the kernel is if it is contested.
  Notes: svn path=/head/; revision=117685
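  The check/unset/re-check dance in item 2 can be sketched in userland C11 atomics. Everything here is a stand-in: the names, the LSB-era `CONTESTED` bit, and the in-memory queue length replacing the kernel's wait queue and casuptr().

  ```c
  #include <assert.h>
  #include <stdatomic.h>
  #include <stdio.h>

  #define CONTESTED 1L   /* contested bit was the LSB at this point */

  static atomic_long umtx_word;
  static atomic_int  queue_len;    /* stand-in for the kernel wait queue */

  /* After acquiring the lock: if nobody is queued, drop the contested
   * bit; then re-check the queue and restore the bit if a waiter slipped
   * in between the check and the swap (the window the commit describes). */
  static void lock_fixup(long owner) {
      if (atomic_load(&queue_len) == 0) {
          long expect = owner | CONTESTED;
          atomic_compare_exchange_strong(&umtx_word, &expect, owner);
          if (atomic_load(&queue_len) != 0) {
              expect = owner;
              atomic_compare_exchange_strong(&umtx_word, &expect,
                                             owner | CONTESTED);
          }
      }
  }

  int main(void) {
      long me = 42;
      atomic_init(&umtx_word, me | CONTESTED);
      atomic_init(&queue_len, 0);
      lock_fixup(me);
      assert(atomic_load(&umtx_word) == me);  /* contested bit cleared */
      printf("word=%ld\n", atomic_load(&umtx_word));
      return 0;
  }
  ```

  The second check is the crucial part: because the bit-clearing swap cannot be done under a lock, correctness comes from re-validating the queue after the swap rather than from mutual exclusion.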
* I was so happy I found the semicolon from hell that I didn't notice another typo in the same line. This typo makes libthr unusable, but its effects were counter-balanced by the extra semicolon, which made libthr remarkably usable for the past several months. (Mike Makonnen, 2003-07-04, 1 file, -1/+1)
  Notes: svn path=/head/; revision=117244
* It's unfair how one extraneous semicolon can cause so much grief. (Mike Makonnen, 2003-07-04, 1 file, -1/+1)
  Notes: svn path=/head/; revision=117219
* Use __FBSDID(). (David E. O'Brien, 2003-06-11, 1 file, -3/+3)
  Notes: svn path=/head/; revision=116182
* - Remove the blocked pointer from the umtx structure.
  - Use a hash of umtx queues to queue blocked threads. We hash on pid and the virtual address of the umtx structure. This eliminates cases where we previously held a lock across a casuptr call.
  (Jeff Roberson, 2003-06-03, 1 file, -171/+163)
  Reviewed by: jhb (quickly)
  Notes: svn path=/head/; revision=115765
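  Hashing a (pid, umtx address) pair onto a fixed set of queue chains can be sketched as below. The chain count and the mixing arithmetic are made up for illustration; the kernel's actual hash function differs.

  ```c
  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  #define UMTX_CHAINS 128   /* hypothetical number of queue chains */

  /* Map a (pid, umtx address) key to a chain index.  Low address bits
   * are dropped because umtx structures are aligned; the pid is mixed
   * in so equal addresses in different processes spread across chains. */
  static unsigned umtx_hash(long pid, uintptr_t uaddr) {
      return (unsigned)(((uaddr >> 3) ^ (uintptr_t)pid) % UMTX_CHAINS);
  }

  int main(void) {
      long word;                            /* pretend this is a umtx */
      unsigned c1 = umtx_hash(100, (uintptr_t)&word);
      unsigned c2 = umtx_hash(100, (uintptr_t)&word);
      unsigned c3 = umtx_hash(101, (uintptr_t)&word);
      assert(c1 == c2);                     /* same key, same chain */
      assert(c1 < UMTX_CHAINS && c3 < UMTX_CHAINS);
      printf("chain=%u\n", c1);
      return 0;
  }
  ```

  Keying the queue (and, after the later per-chain-mutex commit, its lock) off the hash rather than off a pointer stored in the umtx itself is what removes the need to hold a lock across casuptr().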
* - Create a new lock, umtx_lock, for use instead of the proc lock for protecting the umtx queues. We can't use the proc lock because we need to hold the lock across calls to casuptr, which can fault.
  (Jeff Roberson, 2003-05-25, 1 file, -6/+13)
  Approved by: re
  Notes: svn path=/head/; revision=115310
* - Make casuptr return the old value of the location we're trying to update, and change the umtx code to expect this.
  (Jake Burkholder, 2003-04-02, 1 file, -10/+13)
  Reviewed by: jeff
  Notes: svn path=/head/; revision=112967
* - Add an API for doing SMP-safe locks in userland.
  - umtx_lock() is defined as an inline in umtx.h. It tries to do an uncontested acquire of a lock, which falls back to the _umtx_lock() system call if that fails.
  - umtx_unlock() is also an inline which falls back to _umtx_unlock() if the uncontested unlock fails.
  - Locks are keyed off of the thr_id_t of the currently running thread, which is currently just the pointer to the 'struct thread' in the kernel.
  - _umtx_lock() uses the proc pointer to synchronize access to blocked thread queues, which are stored in the first blocked thread.
  (Jeff Roberson, 2003-04-01, 1 file, -0/+303)
  Notes: svn path=/head/; revision=112904
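  The inline fast path described above can be sketched with C11 atomics. `UMTX_UNOWNED` and the slow-path stubs are local stand-ins for the real <sys/umtx.h> inlines and syscalls, which this sketch does not reproduce.

  ```c
  #include <assert.h>
  #include <stdatomic.h>
  #include <stdio.h>

  #define UMTX_UNOWNED 0L   /* illustrative "nobody holds it" value */

  static int slow_path_calls;   /* counts would-be kernel entries */

  static void _umtx_lock_stub(atomic_long *m, long id)   { (void)m; (void)id; slow_path_calls++; }
  static void _umtx_unlock_stub(atomic_long *m, long id) { (void)m; (void)id; slow_path_calls++; }

  /* Try the uncontested compare-and-swap in userland; only a failed
   * swap (someone else holds the lock) falls back to the kernel. */
  static void umtx_lock(atomic_long *m, long id) {
      long expect = UMTX_UNOWNED;
      if (!atomic_compare_exchange_strong(m, &expect, id))
          _umtx_lock_stub(m, id);       /* contested: enter the kernel */
  }

  static void umtx_unlock(atomic_long *m, long id) {
      long expect = id;
      if (!atomic_compare_exchange_strong(m, &expect, UMTX_UNOWNED))
          _umtx_unlock_stub(m, id);     /* contested: wake a waiter */
  }

  int main(void) {
      atomic_long mtx;
      atomic_init(&mtx, UMTX_UNOWNED);
      umtx_lock(&mtx, 7);               /* 7: a hypothetical thr_id_t */
      assert(atomic_load(&mtx) == 7);
      umtx_unlock(&mtx, 7);
      assert(atomic_load(&mtx) == UMTX_UNOWNED);
      assert(slow_path_calls == 0);     /* uncontested: no syscalls */
      printf("slow-path calls: %d\n", slow_path_calls);
      return 0;
  }
  ```

  The design point is that the common uncontested case completes with one atomic instruction and no kernel transition; the syscall exists only to park and wake threads when the swap loses a race.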