path: root/sys/kern/kern_idle.c
Commit message | Author | Age | Files | Lines
* Nathan Whitehorn, 2011-05-31 (1 file, -1/+1):
  On multi-core, multi-threaded PPC systems, it is important that the threads
  be brought up in the order they are enumerated in the device tree (in
  particular, that thread 0 on each core be brought up first). The SLIST
  through which we loop to start the CPUs has all of its entries added with
  SLIST_INSERT_HEAD(), which means it is in reverse order of enumeration, and
  so AP startup would always fail in such situations (causing a machine check
  or RTAS failure). Fix this by changing the SLIST into an STAILQ, and
  inserting new CPUs at the end.
  Reviewed by: jhb
  Notes: svn path=/head/; revision=222531
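  A minimal sketch of the shape of this fix, using an illustrative list
  element type rather than the real pcpu structures:

      #include <sys/queue.h>

      struct cpu_entry {
              int cpu_id;
              STAILQ_ENTRY(cpu_entry) ce_link;
      };

      static STAILQ_HEAD(, cpu_entry) cpu_list =
          STAILQ_HEAD_INITIALIZER(cpu_list);

      /*
       * SLIST_INSERT_HEAD() builds the list in reverse enumeration order;
       * inserting at the tail preserves the order, so thread 0 of each
       * core is started before its sibling threads.
       */
      static void
      cpu_list_add(struct cpu_entry *ce)
      {
              STAILQ_INSERT_TAIL(&cpu_list, ce, ce_link);
      }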
* Attilio Rao, 2009-11-03 (1 file, -2/+1):
  Split P_NOLOAD into a per-thread flag (TDF_NOLOAD).
  This improvement aims to avoid further cache misses in scheduler-specific
  functions that need to keep track of average thread running time, and
  further locking in places that set this flag.
  Reported by: jeff (originally), kris (currently)
  Reviewed by: jhb
  Tested by: Giuseppe Cocomazzi <sbudella at email dot it>
  Notes: svn path=/head/; revision=198854
* Robert Watson, 2008-03-16 (1 file, -1/+1):
  In keeping with style(9)'s recommendations on macros, use a ';' after each
  SYSINIT() macro invocation. This makes a number of lightweight C parsers
  much happier with the FreeBSD kernel source, including cflow's prcc and
  lxr.
  MFC after: 1 month
  Discussed with: imp, rink
  Notes: svn path=/head/; revision=177253
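  A sketch of the resulting idiom; the function and subsystem chosen here
  are illustrative, not the actual kern_idle.c registration:

      #include <sys/param.h>
      #include <sys/kernel.h>

      static void
      example_setup(void *dummy __unused)
      {
              /* One-time initialization run during boot. */
      }
      /* Note the trailing ';', which keeps lightweight parsers happy. */
      SYSINIT(example_setup, SI_SUB_KTHREAD_IDLE, SI_ORDER_FIRST,
          example_setup, NULL);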
* Julian Elischer, 2007-10-27 (1 file, -2/+2):
  Rename the processes to 'idle' and 'intr', as per jhb.
  Notes: svn path=/head/; revision=173051
* Julian Elischer, 2007-10-27 (1 file, -1/+1):
  Initialise the initial process pointer to NULL so that we know we don't
  have an idle process yet. I'm guessing that on my system this was always
  0 already.
  Found by: Ed Schouten
  Notes: svn path=/head/; revision=173050
* Julian Elischer, 2007-10-26 (1 file, -1/+3):
  Oops, over-optimised and broke non-SMP builds.
  Notes: svn path=/head/; revision=173035
* Julian Elischer, 2007-10-26 (1 file, -9/+7):
  Introduce a way to make pure kernel threads.
  kthread_add() takes the same parameters as the old kthread_create() plus a
  pointer to a process structure, and adds a kernel thread to that process.
  kproc_kthread_add() takes the parameters for kthread_add() plus a process
  name and a pointer to a pointer to a process instead of just a pointer;
  if the proc * is NULL, it creates the process to the specifications
  required before adding the thread to it.
  All other old kthread_xxx() calls remain, but act on (struct thread *)
  instead of (struct proc *). One reason to change the name is so that any
  old kernel modules that are lying around and expect kthread_create() to
  make a process will not just accidentally link.
  Fix top to show kernel threads by their thread name in -SH mode, and add
  a tdnam formatting option to ps to show thread names. Make all idle
  threads actual kthreads and put them into their own idled process. Make
  all interrupt threads kthreads and put them in an interd process (mainly
  for aesthetic and accounting reasons). Rename proc 0 to 'kernel'; its
  swapper thread is now 'swapper'.
  Man page fixes to follow.
  Notes: svn path=/head/; revision=173004
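  A hedged sketch of creating a per-CPU idle thread with the new interface
  (the helper, argument values, and error handling are illustrative):

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/kthread.h>
      #include <sys/proc.h>
      #include <sys/unistd.h>

      static struct proc *idleproc;   /* NULL: first call creates it */

      static void
      start_idle_thread(void (*idle_fn)(void *), int cpu)
      {
              struct thread *td;
              int error;

              /*
               * The first call creates the 'idle' process; later calls
               * hang additional threads off the same process.
               */
              error = kproc_kthread_add(idle_fn, NULL, &idleproc, &td,
                  RFSTOPPED, 0, "idle", "idle: cpu%d", cpu);
              if (error != 0)
                      panic("cannot create idle thread: %d", error);
      }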
* Julian Elischer, 2007-10-20 (1 file, -3/+3):
  Rename the kthread_xxx (e.g. kthread_create()) calls to kproc_xxx, as they
  actually make whole processes. This makes way for us to add REAL
  kthread_create() and friends that actually make threads. It turns out that
  most of these calls actually end up being moved back to the thread version
  when it's added, but we need to make this cosmetic change first.
  I'd LOVE to do this rename in 7.0 so that we can eventually MFC the new
  kthread_xxx() calls.
  Notes: svn path=/head/; revision=172836
* Jeff Roberson, 2007-06-05 (1 file, -2/+2):
  Commit 14/14 of sched_lock decomposition.
  - Use thread_lock() rather than sched_lock for per-thread scheduling
    synchronization.
  - Use the per-process spinlock rather than the sched_lock for per-process
    scheduling synchronization.
  Tested by: kris, current@
  Tested on: i386, amd64, ULE, 4BSD, libthr, libkse, PREEMPTION, etc.
  Discussed with: kris, attilio, kmacy, jhb, julian, bde (small parts each)
  Notes: svn path=/head/; revision=170307
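  A minimal sketch of the per-thread locking pattern this decomposition
  enables (the flag-setting helper is illustrative):

      #include <sys/param.h>
      #include <sys/proc.h>

      /*
       * Per-thread scheduling state is now protected by the thread's own
       * lock instead of the single global sched_lock.
       */
      static void
      set_thread_flag(struct thread *td, int flag)
      {
              thread_lock(td);
              td->td_flags |= flag;
              thread_unlock(td);
      }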
* Jeff Roberson, 2007-01-23 (1 file, -42/+2):
  - Remove setrunqueue and replace it with direct calls to sched_add().
    setrunqueue() was mostly empty. The few asserts and thread state setting
    were moved to the individual schedulers. sched_add() was chosen to
    displace it for naming consistency reasons.
  - Remove adjustrunqueue; it was 4 lines of code that was ifdef'd to be
    different on all three schedulers where it was only called in one place
    each.
  - Remove the long ifdef'd out remrunqueue code.
  - Remove the now redundant ts_state. Inspect the thread state directly.
  - Don't set TSF_* flags from kern_switch.c; we were only doing this to
    support a feature in one scheduler.
  - Change sched_choose() to return a thread rather than a td_sched. Also,
    rely on the schedulers to return the idlethread. This simplifies the
    logic in choosethread(). Aside from the run queue links, kern_switch.c
    mostly does not care about the contents of td_sched.
    Discussed with: julian
  - Move the idle thread loop into the per-scheduler area. ULE wants to do
    something different from the other schedulers.
    Suggested by: jhb
  Tested on: x86/amd64 sched_{4BSD, ULE, CORE}.
  Notes: svn path=/head/; revision=166188
* Julian Elischer, 2006-12-06 (1 file, -4/+0):
  Threading cleanup.. part 2 of several.
  Make part of John Birrell's KSE patch permanent. Specifically, remove:
  - Any reference to the ksegrp structure. This feature was never fully
    utilised and made things overly complicated.
  - All code in the scheduler that tried to make threaded programs fair to
    unthreaded programs. Libpthread processes will already do this to some
    extent and libthr processes already disable it.
  Also: since this makes such a big change to the scheduler(s), take the
  opportunity to rename some structures and elements that had to be moved
  anyhow. This makes the code a lot more readable.
  The ULE scheduler compiles again but I have no idea if it works. The 4bsd
  scheduler still requires a little cleaning, and some functions that now do
  ALMOST nothing will go away, but I thought I'd do that as a separate
  commit.
  Tested by: David Xu and Dan Eischen using libthr and libpthread.
  Notes: svn path=/head/; revision=164936
* David Xu, 2006-11-12 (1 file, -5/+0):
  Use mi_switch(); this should fix the loadavg calculation problem in the
  NO_KSE case.
  Notes: svn path=/head/; revision=164211
* John Birrell, 2006-10-26 (1 file, -0/+9):
  Make KSE a kernel option, turned on by default in all GENERIC kernel
  configs except sun4v (which doesn't process signals properly with KSE).
  Reviewed by: davidxu@
  Notes: svn path=/head/; revision=163709
* John Baldwin, 2005-04-04 (1 file, -4/+0):
  Divorce critical sections from spinlocks. Critical sections as denoted by
  critical_enter() and critical_exit() are now solely a mechanism for
  deferring kernel preemptions. They no longer have any effect on
  interrupts. This means that standalone critical sections are now very
  cheap, as they are simply unlocked integer increments and decrements for
  the common case.
  Spin mutexes now use a separate KPI implemented in MD code:
  spinlock_enter() and spinlock_exit(). This KPI is responsible for
  providing whatever MD guarantees are needed to ensure that a thread
  holding a spin lock won't be preempted by any other code that will try to
  lock the same lock. For now all archs continue to block interrupts in a
  "spinlock section" as they did formerly in all critical sections.
  Note that I've also taken this opportunity to push a few things into MD
  code rather than MI. For example, critical_fork_exit() no longer exists.
  Instead, MD code ensures that new threads have the correct state when
  they are created. Also, we no longer try to fix up the idlethreads for
  APs in MI code. Instead, each arch sets the initial curthread and adjusts
  the state of the idle thread it borrows in order to perform the initial
  context switch.
  This change is largely a big NOP, but the cleaner separation it provides
  will allow for more efficient alternative locking schemes in other parts
  of the kernel (bare critical sections rather than per-CPU spin mutexes
  for per-CPU data, for example).
  Reviewed by: grehan, cognet, arch@, others
  Tested on: i386, alpha, sparc64, powerpc, arm, possibly more
  Notes: svn path=/head/; revision=144637
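  A sketch of the distinction after this change; the per-CPU-data use case
  is illustrative:

      #include <sys/param.h>
      #include <sys/systm.h>
      #include <sys/proc.h>

      static void
      touch_percpu_state(void)
      {
              /*
               * A bare critical section now only defers preemption; it is
               * a cheap nesting-count increment and no longer touches the
               * interrupt state.
               */
              critical_enter();
              /* ... manipulate per-CPU data safe from preemption ... */
              critical_exit();        /* may run a deferred preemption */
      }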
* Suleiman Souhlal, 2005-02-04 (1 file, -1/+2):
  Set the scheduling class of the idle threads to PRI_IDLE. While there,
  set their priority with sched_prio() instead of changing it 'by hand'.
  Reviewed by: jhb
  Approved by: grehan (mentor)
  Notes: svn path=/head/; revision=141246
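  A hedged sketch of the pattern (locking shown in the later thread_lock()
  style; at the time of this commit the code would have used sched_lock):

      #include <sys/param.h>
      #include <sys/proc.h>
      #include <sys/priority.h>
      #include <sys/sched.h>

      static void
      make_idle_priority(struct thread *td)
      {
              thread_lock(td);
              sched_class(td, PRI_IDLE);      /* scheduling class */
              sched_prio(td, PRI_MAX_IDLE);   /* lowest prio in the class */
              thread_unlock(td);
      }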
* Julian Elischer, 2004-09-01 (1 file, -0/+18):
  Give the 4bsd scheduler the ability to wake up idle processors when there
  is new work to be done.
  MFC after: 5 days
  Notes: svn path=/head/; revision=134591
* Warner Losh, 2004-07-25 (1 file, -1/+21):
  Expand the generic, but bogusly formed, copyright notice to include the
  license from /usr/src/COPYRIGHT. cvs annotate shows that this was written
  by jasone, julian, jhb, peter, bmilekic and obrien, and cvs log shows
  that many others may have contributed to this file. As such, go ahead and
  use the author of 'FreeBSD Project' for this file. If this is a problem,
  please notify me.
  # This eliminates the last file in the kernel with an indirect reference
  # to /usr/src/COPYRIGHT in the kernel. A few more in userland remain.
  Notes: svn path=/head/; revision=132637
* John Baldwin, 2004-07-02 (1 file, -3/+2):
  - Change mi_switch() and sched_switch() to accept an optional thread to
    switch to. If a non-NULL thread pointer is passed in, then the CPU will
    switch to that thread directly rather than calling choosethread() to
    pick a thread to switch to.
  - Make sched_switch() aware of idle threads and know to do
    TD_SET_CAN_RUN() instead of sticking them on the run queue, rather than
    requiring all callers of mi_switch() to know to do this if they can be
    called from an idlethread.
  - Move constants for arguments to mi_switch() and thread_single() out of
    the middle of the function prototypes and up above into their own
    section.
  Notes: svn path=/head/; revision=131473
* John Baldwin, 2004-06-28 (1 file, -0/+1):
  Adjust the priority of the idle threads to be the lowest possible
  priority. This is just a cosmetic nit, as the idle thread priorities
  aren't used by the schedulers.
  Reported by: bde
  Notes: svn path=/head/; revision=131243
* John Baldwin, 2004-02-05 (1 file, -1/+0):
  Always set a process' state to normal when it is fully constructed in
  fork1() rather than only doing it for the RFSTOPPED case and then having
  to fix it up in other places later on.
  Notes: svn path=/head/; revision=125496
* Jeff Roberson, 2004-01-25 (1 file, -2/+1):
  - Add a flags parameter to mi_switch. The value of flags may be SW_VOL or
    SW_INVOL. Assert that one of these is set in mi_switch() and properly
    adjust the rusage statistics. This is to simplify the large number of
    users of this interface which were previously all required to adjust
    the proper counter prior to calling mi_switch(). This also facilitates
    more switch and locking optimizations.
  - Change all callers of mi_switch() to pass the appropriate parameter and
    remove direct references to the process statistics.
  Notes: svn path=/head/; revision=124944
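  A sketch of a voluntary yield under the flags-based interface, as it
  looked in the sched_lock era combined with the optional-thread argument
  from the 2004-07-02 change above (exact signatures have shifted over
  time, so treat this as illustrative):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>
      #include <sys/proc.h>

      static void
      yield_voluntarily(void)
      {
              mtx_lock_spin(&sched_lock);
              mi_switch(SW_VOL, NULL);  /* SW_VOL drives rusage accounting */
              mtx_unlock_spin(&sched_lock);
      }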
* Peter Wemm, 2003-10-19 (1 file, -37/+5):
  Tidy up loose ends in the idle process. Call the MI cpu_idle() function
  for all platforms now. XXX alpha/sparc64/powerpc should fill in the
  function.
  Submitted by: bde
  Notes: svn path=/head/; revision=121238
* Peter Wemm, 2003-10-17 (1 file, -1/+1):
  Halt the cpu on amd64 as well. For some strange reason, this makes a fair
  bit of difference to the power consumption and lets my cpu cool down
  enough for the temperature-sensitive fan controller to completely stop
  the cpu fan at times.
  Notes: svn path=/head/; revision=121149
* Marcel Moolenaar, 2003-10-17 (1 file, -1/+1):
  Implement cpu_idle() on ia64. We put the processor in a lightweight halt
  state that minimizes power consumption while still preserving cache and
  TLB coherency. Halting the processor is not conditional at this time.
  Tested with UP and SMP kernels.
  Notes: svn path=/head/; revision=121148
* David E. O'Brien, 2003-06-11 (1 file, -1/+3):
  Use __FBSDID().
  Notes: svn path=/head/; revision=116182
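  The idiom this commit applies, for reference (the tag itself is expanded
  by the version control system):

      #include <sys/cdefs.h>
      __FBSDID("$FreeBSD$");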
* Julian Elischer, 2003-05-02 (1 file, -1/+1):
  Move the flag that indicates an idle thread from the KSE to the thread.
  It was always referenced via the thread anyhow.
  Reviewed by: jhb (a LOOOOONG time ago)
  Notes: svn path=/head/; revision=114471
* John Baldwin, 2003-04-17 (1 file, -1/+5):
  Add some locking for a few proc and thread fields.
  Notes: svn path=/head/; revision=113629
* Jeff Roberson, 2002-10-12 (1 file, -2/+3):
  - Create a new scheduler api that is defined in sys/sched.h.
  - Begin moving scheduler specific functionality into sched_4bsd.c.
  - Replace direct manipulation of scheduler data with hooks provided by
    the new api.
  - Remove KSE specific state modifications and single runq assumptions
    from kern_switch.c.
  Reviewed by: -arch
  Notes: svn path=/head/; revision=104964
* Scott Long, 2002-10-02 (1 file, -2/+2):
  Some kernel threads try to do significant work, and the default
  KSTACK_PAGES doesn't give them enough stack to do much before blowing
  away the pcb. This adds MI and MD code to allow the allocation of an
  alternate kstack whose size can be specified when calling
  kthread_create. Passing the value 0 prevents the alternate kstack from
  being created. Note that the ia64 MD code is missing for now, and PowerPC
  was only partially written due to the pmap.c being incomplete there.
  Though this patch does not modify anything to make use of the alternate
  kstack, acpi and usb are good candidates.
  Reviewed by: jake, peter, jhb
  Notes: svn path=/head/; revision=104354
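  A hedged sketch of a caller using the new stack-size parameter; the
  thread function, page count, and flags are illustrative, and the
  signature shown is the kthread_create() of this era (before the later
  kproc rename above):

      #include <sys/param.h>
      #include <sys/kthread.h>
      #include <sys/proc.h>
      #include <sys/unistd.h>

      static void
      deep_stack_worker(void *arg)
      {
              /* ... work needing more than KSTACK_PAGES of stack ... */
      }

      static struct proc *workerproc;

      static void
      start_worker(void)
      {
              /* 4 kstack pages instead of the default; 0 keeps default. */
              (void)kthread_create(deep_stack_worker, NULL, &workerproc,
                  RFHIGHPID, 4, "worker");
      }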
* Julian Elischer, 2002-09-11 (1 file, -2/+2):
  Completely redo thread states.
  Reviewed by: davidxu@freebsd.org
  Notes: svn path=/head/; revision=103216
* Julian Elischer, 2002-08-01 (1 file, -1/+2):
  Slight cleanup of some comments/whitespace.
  - Make idle process state more consistent.
  - Add an assert on thread state.
  - Clean up idleproc/mi_switch() interaction.
  - Use a local instead of referencing curthread 7 times in a row (I've
    been told curthread can be expensive on some architectures).
  - Remove some commented out code.
  - Add a little commented out code (completion coming soon).
  Reviewed by: jhb@freebsd.org
  Notes: svn path=/head/; revision=101176
* Julian Elischer, 2002-07-17 (1 file, -0/+1):
  Make sure the process state for the idle proc is set correctly from the
  beginning.
  Notes: svn path=/head/; revision=100261
* Julian Elischer, 2002-07-14 (1 file, -5/+1):
  Thinking about it, I came to the conclusion that the KSE states were
  incorrectly formulated. The correct states should be:
  - IDLE: on the idle KSE list for that KSEG.
  - RUNQ: linked onto the system run queue.
  - THREAD: attached to a thread and slaved to whatever state the thread
    is in.
  This means that most places where we were adjusting KSE state can go
  away, as it is just moving around because the thread is. The only places
  we need to adjust the KSE state are in transition to and from the idle
  and run queues.
  Reviewed by: jhb@freebsd.org
  Notes: svn path=/head/; revision=99942
* Julian Elischer, 2002-06-29 (1 file, -4/+15):
  Part 1 of KSE-III.
  The ability to schedule multiple threads per process (on one cpu) by
  making ALL system calls optionally asynchronous.
  To come: ia64 and power-pc patches, patches for gdb, test program (in
  tools).
  Reviewed by: almost everyone who counts (at various times: peter, jhb,
  matt, alfred, mini, bernd, and a cast of thousands)
  NOTE: this is still Beta code, and contains lots of debugging stuff.
  Expect slight instability in signals.
  Notes: svn path=/head/; revision=99072
* Julian Elischer, 2002-02-07 (1 file, -2/+2):
  Pre-KSE/M3 commit.
  This is a low-functionality change that changes the kernel to access the
  main thread of a process via the linked list of threads rather than
  assuming that it is embedded in the process. It IS still embedded there,
  but remove all the code that assumes that, in preparation for the next
  commit which will actually move it out.
  Reviewed by: peter@freebsd.org, gallatin@cs.duke.edu, benno rice
  Notes: svn path=/head/; revision=90361
* John Baldwin, 2001-12-18 (1 file, -1/+3):
  Modify the critical section API as follows:
  - The MD functions critical_enter/exit are renamed to start with a cpu_
    prefix.
  - MI wrapper functions critical_enter/exit maintain a per-thread nesting
    count and a per-thread critical section saved state set when entering
    a critical section while at nesting level 0 and restored when exiting
    to nesting level 0. This moves the saved state out of spin mutexes so
    that interlocking spin mutexes works properly.
  - Most low-level MD code that used critical_enter/exit now use
    cpu_critical_enter/exit. MI code such as device drivers and spin
    mutexes use the MI wrappers. Note that since the MI wrappers store the
    state in the current thread, they do not have any return values or
    arguments.
  - mtx_intr_enable() is replaced with a constant CRITICAL_FORK which is
    assigned to curthread->td_savecrit during fork_exit().
  Tested on: i386, alpha
  Notes: svn path=/head/; revision=88088
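  Roughly the shape of the MI wrappers described above, as a sketch of the
  historical interface (not current code):

      void
      critical_enter(void)
      {
              struct thread *td = curthread;

              /* Save MD interrupt state only at the outermost level. */
              if (td->td_critnest == 0)
                      td->td_savecrit = cpu_critical_enter();
              td->td_critnest++;
      }

      void
      critical_exit(void)
      {
              struct thread *td = curthread;

              td->td_critnest--;
              if (td->td_critnest == 0)
                      cpu_critical_exit(td->td_savecrit);
      }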
* John Baldwin, 2001-12-11 (1 file, -6/+6):
  Overhaul the per-CPU support a bit:
  - The MI portions of struct globaldata have been consolidated into a MI
    struct pcpu. The MD per-CPU data are specified via a macro defined in
    machine/pcpu.h. A macro was chosen over a struct mdpcpu so that the
    interface would be cleaner (PCPU_GET(my_md_field) vs.
    PCPU_GET(md.md_my_md_field)).
  - All references to globaldata are changed to pcpu instead. In a UP
    kernel, this data was stored as global variables, which is where the
    original name came from. In an SMP world this data is per-CPU and
    ideally private to each CPU outside of the context of debuggers. This
    also included combining machine/globaldata.h and machine/globals.h
    into machine/pcpu.h.
  - The pointer to the thread using the FPU on i386 was renamed from
    npxthread to fpcurthread to be identical with other architectures.
  - Make the show pcpu ddb command MI with a MD callout to display MD
    fields.
  - The globaldata_register() function was renamed to pcpu_init() and now
    init's MI fields of a struct pcpu in addition to registering it with
    the internal array and list.
  - A pcpu_destroy() function was added to remove a struct pcpu from the
    internal array and list.
  Tested on: alpha, i386
  Reviewed by: peter, jake
  Notes: svn path=/head/; revision=87702
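  A minimal sketch of the accessor style this exposes (cpuid is a real MI
  pcpu field; the helper itself is illustrative):

      #include <sys/param.h>
      #include <sys/pcpu.h>

      static int
      current_cpu_id(void)
      {
              /* MI code reads per-CPU fields through PCPU_GET(). */
              return (PCPU_GET(cpuid));
      }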
* Julian Elischer, 2001-09-12 (1 file, -4/+4):
  KSE Milestone 2. Note: ALL MODULES MUST BE RECOMPILED.
  Make the kernel aware that there are smaller units of scheduling than the
  process (but only allow one thread per process at this time). This is
  functionally equivalent to the previous -current except that there is a
  thread associated with each process.
  Sorry john! (your next MFC will be a doosie!)
  Reviewed by: peter@freebsd.org, dillon@freebsd.org
  X-MFC after: ha ha ha ha
  Notes: svn path=/head/; revision=83366
* John Baldwin, 2001-09-01 (1 file, -9/+0):
  Remove #if 0'd remnants of the old idle page zeroing.
  Notes: svn path=/head/; revision=82757
* John Baldwin, 2001-05-10 (1 file, -14/+5):
  - Split out the support for per-CPU data from the SMP code. UP kernels
    have per-CPU data and gdb on the i386 at least needs access to it.
  - Clean up includes in kern_idle.c and subr_smp.c.
  Reviewed by: jake
  Notes: svn path=/head/; revision=76440
* Mark Murray, 2001-05-01 (1 file, -2/+3):
  Undo part of the tangle of having sys/lock.h and sys/mutex.h included in
  other "system" header files. Also help the deprecation of lockmgr.h by
  making it a sub-include of sys/lock.h and removing sys/lockmgr.h from
  kernel .c files. Sort sys/*.h includes where possible in affected files.
  OK'ed by: bde (with reservations)
  Notes: svn path=/head/; revision=76166
* John Baldwin, 2001-04-27 (1 file, -11/+17):
  Overhaul of the SMP code. Several portions of the SMP kernel support have
  been made machine independent and various other adjustments have been
  made to support Alpha SMP.
  - It splits the per-process portions of hardclock() and statclock() off
    into hardclock_process() and statclock_process() respectively.
    hardclock() and statclock() call the *_process() functions for the
    current process so that UP systems will run as before. For SMP
    systems, it is simply necessary to ensure that all other processors
    execute the *_process() functions when the main clock functions are
    triggered on one CPU by an interrupt. For the alpha 4100, clock
    interrupts are delivered in a staggered broadcast fashion, so we simply
    call hardclock/statclock on the boot CPU and call the *_process()
    functions on the secondaries. For x86, we call statclock and hardclock
    as usual and then call forward_hardclock/statclock in the MD code to
    send an IPI to cause the APs to execute forward_hardclock/statclock,
    which then call the *_process() functions.
  - forward_signal() and forward_roundrobin() have been reworked to be MI
    and to involve less hackery. Now the cpu doing the forward sets any
    flags, etc. and sends a very simple IPI_AST to the other cpu(s). AST
    IPIs now just basically return so that they can execute ast() and
    don't bother with setting the astpending or needresched flags
    themselves. This also removes the loop in forward_signal() as
    sched_lock closes the race condition that the loop worked around.
  - need_resched(), resched_wanted() and clear_resched() have been changed
    to take a process to act on rather than assuming curproc, so that they
    can be used to implement forward_roundrobin() as described above.
  - Various other SMP variables have been moved to a MI subr_smp.c and a
    new header sys/smp.h declares MI SMP variables and APIs. The IPI APIs
    from machine/ipl.h have moved to machine/smp.h which is included by
    sys/smp.h.
  - The globaldata_register() and globaldata_find() functions as well as
    the SLIST of globaldata structures have become MI and moved into
    subr_smp.c. Also, the globaldata list is only available if SMP support
    is compiled in.
  Reviewed by: jake, peter
  Looked over by: eivind
  Notes: svn path=/head/; revision=76078
* Jake Burkholder, 2001-02-12 (1 file, -0/+2):
  Implement a unified run queue and adjust priority levels accordingly.
  - All processes go into the same array of queues, with different
    scheduling classes using different portions of the array. This allows
    user processes to have their priorities propagated up into interrupt
    thread range if need be.
  - I chose 64 run queues as an arbitrary number that is greater than 32.
    We used to have 4 separate arrays of 32 queues each, so this may not
    be optimal. The new run queue code was written with this in mind;
    changing the number of run queues only requires changing constants in
    runq.h and adjusting the priority levels.
  - The new run queue code takes the run queue as a parameter. This is
    intended to be used to create per-cpu run queues. Implement wrappers
    for compatibility with the old interface which pass in the global run
    queue structure.
  - Group the priority level, user priority, native priority (before
    propagation) and the scheduling class into a struct priority.
  - Change any hard coded priority levels that I found to use symbolic
    constants (TTIPRI and TTOPRI).
  - Remove the curpriority global variable and use that of curproc. This
    was used to detect when a process' priority had lowered and it should
    yield. We now effectively yield on every interrupt.
  - Activate propagate_priority(). It should now have the desired effect
    without needing to also propagate the scheduling class.
  - Temporarily comment out the call to vm_page_zero_idle() in the idle
    loop. It interfered with propagate_priority() because the idle process
    needed to do a non-blocking acquire of Giant and then other processes
    would try to propagate their priority onto it. The idle process should
    not do anything except idle. vm_page_zero_idle() will return in the
    form of an idle-priority kernel thread which is woken up at
    appropriate times by the vm system.
  - Update struct kinfo_proc to the new priority interface. Deliberately
    change its size by adjusting the spare fields. It remained the same
    size, but the layout has changed, so userland processes that use it
    would parse the data incorrectly. The size constraint should really be
    changed to an arbitrary version number. Also add a debug.sizeof sysctl
    node for struct kinfo_proc.
  Notes: svn path=/head/; revision=72376
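  A sketch of how a unified array of queues maps priority levels onto queue
  indices; the constant names follow the historical runq.h, but the helper
  is illustrative:

      #define RQ_NQS  64      /* number of run queues */
      #define RQ_PPQ  4       /* priority levels per queue */

      /* 256 priority levels fold onto 64 queues, 4 levels per queue. */
      static inline int
      runq_index(int pri)
      {
              return (pri / RQ_PPQ);  /* 0..255 -> 0..63 */
      }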
* John Baldwin, 2001-02-09 (1 file, -1/+6):
  - Point out in the comments that we don't lock anything during the idle
    setup because only the boot processor should be running.
  - Initialize curproc to point to each CPU's respective idleproc if their
    curproc is NULL.
  - Keep track of the number of context switches performed by idleproc.
  Notes: svn path=/head/; revision=72222
* Bosko Milekic, 2001-02-09 (1 file, -2/+2):
  Change and clean the mutex lock interface.
  mtx_enter(lock, type) becomes:
  - mtx_lock(lock) for sleep locks (MTX_DEF-initialized locks)
  - mtx_lock_spin(lock) for spin locks (MTX_SPIN-initialized)
  Similarly, for releasing a lock, we now have mtx_unlock(lock) for MTX_DEF
  and mtx_unlock_spin(lock) for MTX_SPIN. We change the caller interface
  for the two different types of locks because the semantics are entirely
  different for each case, and this makes it explicitly clear and, at the
  same time, it rids us of the extra `type' argument.
  The enter->lock and exit->unlock change has been made with the idea that
  we're "locking data" and not "entering locked code" in mind.
  Further, remove all additional "flags" previously passed to the lock
  acquire/release routines with the exception of two: MTX_QUIET and
  MTX_NOSWITCH. The functionality of these flags is preserved and they can
  be passed to the lock/unlock routines by calling the corresponding
  wrappers: mtx_{lock, unlock}_flags(lock, flag(s)) and
  mtx_{lock, unlock}_spin_flags(lock, flag(s)) for MTX_DEF and MTX_SPIN
  locks, respectively.
  Re-inline some lock acq/rel code; in the sleep lock case, we only inline
  the _obtain_lock()s in order to ensure that the inlined code fits into a
  cache line. In the spin lock case, we inline recursion and actually only
  perform a function call if we need to spin. This change has been made
  with the idea that we generally tend to avoid spin locks and that also
  the spin locks that we do have and are heavily used (i.e. sched_lock) do
  recurse, and therefore in an effort to reduce function call overhead for
  some architectures (such as alpha), we inline recursion for this case.
  Create a new malloc type for the witness code and retire from using the
  M_DEV type. The new type is called M_WITNESS and is only declared if
  WITNESS is enabled.
  Begin cleaning up some machdep/mutex.h code - specifically updated the
  "optimized" inlined code in alpha/mutex.h and wrote MTX_LOCK_SPIN and
  MTX_UNLOCK_SPIN asm macros for the i386/mutex.h as we presently need
  those.
  Finally, caught up to the interface changes in all sys code.
  Contributors: jake, jhb, jasone (in no particular order)
  Notes: svn path=/head/; revision=72200
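  A sketch of the renamed interface in use (the mutex and the protected
  operation are illustrative; the lock is assumed initialized elsewhere
  with MTX_DEF):

      #include <sys/param.h>
      #include <sys/lock.h>
      #include <sys/mutex.h>

      static struct mtx example_mtx;

      static void
      example_locked_op(void)
      {
              /*
               * Was: mtx_enter(&example_mtx, MTX_DEF). The lock's type is
               * fixed at init time, so no type argument is needed here.
               */
              mtx_lock(&example_mtx);
              /* ... protected section ... */
              mtx_unlock(&example_mtx);
      }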
* John Baldwin, 2000-10-20 (1 file, -1/+1):
  Catch up to moving headers:
  - machine/ipl.h -> sys/ipl.h
  - machine/mutex.h -> sys/mutex.h
  Notes: svn path=/head/; revision=67365
* John Baldwin, 2000-10-19 (1 file, -4/+3):
  Axe the idle_event eventhandler, and add a MD cpu_idle function used for
  things such as halting CPUs, idling CPUs, etc.
  Discussed with: msmith
  Notes: svn path=/head/; revision=67308
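  A sketch of what such a MD hook can look like on x86 (illustrative: the
  real versions of the era gated "hlt" on tunables and checked for runnable
  processes first, and the function has since grown a "busy" argument):

      static void
      cpu_idle(void)
      {
              /*
               * Enable interrupts and halt until the next interrupt
               * arrives; this is the classic x86 idle primitive.
               */
              __asm __volatile("sti; hlt");
      }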
* Peter Wemm, 2000-10-18 (1 file, -1/+1):
  EVENTHANDLER_INVOKE() takes two arguments.
  Notes: svn path=/head/; revision=67297
* John Baldwin, 2000-10-18 (1 file, -1/+1):
  Don't needlessly pass the diagnostic counter to the idle_event event
  handlers.
  Notes: svn path=/head/; revision=67280
* John Baldwin, 2000-10-17 (1 file, -6/+12):
  - Wrap the sanity checks for staying in the idle loop for absurdly long
    amounts of time in #ifdef DIAGNOSTIC.
  - Call vm_page_zero_idle() during the idle loop.
  Notes: svn path=/head/; revision=67266
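  An illustrative shape for the idle loop after this change; the
  nothing_runnable() predicate is a stand-in for the era's run-queue check,
  not a real KPI:

      static void
      idle_proc(void *dummy __unused)
      {
              for (;;) {
      #ifdef DIAGNOSTIC
                      int spins = 0;
      #endif
                      while (nothing_runnable()) {
      #ifdef DIAGNOSTIC
                              if (++spins < 0)        /* counter wrapped */
                                      panic("idle loop ran absurdly long");
      #endif
                              vm_page_zero_idle();    /* zero a free page */
                      }
                      mi_switch();    /* a process became runnable */
              }
      }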