path: root/sys/sparc64
author     John Baldwin <jhb@FreeBSD.org>  2004-07-02 20:21:44 +0000
committer  John Baldwin <jhb@FreeBSD.org>  2004-07-02 20:21:44 +0000
commit     0c0b25ae91328c6b388ef5faa77ec9089f2950a7 (patch)
tree       2a5d6a91ba98f5b9e075eecc1a9ca724b8a9110a /sys/sparc64
parent     5a66986defa715403bf55b0c3534040cf1b87027 (diff)
download   src-0c0b25ae91328c6b388ef5faa77ec9089f2950a7.tar.gz
           src-0c0b25ae91328c6b388ef5faa77ec9089f2950a7.zip
Implement preemption of kernel threads natively in the scheduler rather
than as one-off hacks in various other parts of the kernel:

- Add a function maybe_preempt() that is called from sched_add() to
  determine if a thread about to be added to a run queue should be
  preempted to directly. If it is not safe to preempt or if the new
  thread does not have a high enough priority, then the function
  returns false and sched_add() adds the thread to the run queue. If
  the thread should be preempted to but the current thread is in a
  nested critical section, then the flag TDF_OWEPREEMPT is set and the
  thread is added to the run queue. Otherwise, mi_switch() is called
  immediately and the thread is never added to the run queue since it
  is switched to directly. When exiting an outermost critical section,
  if TDF_OWEPREEMPT is set, then clear it and call mi_switch() to
  perform the deferred preemption.
- Remove explicit preemption from ithread_schedule() as calling
  setrunqueue() now does all the correct work. This also removes the
  do_switch argument from ithread_schedule().
- Do not use the manual preemption code in mtx_unlock if the
  architecture supports native preemption.
- Don't call mi_switch() in a loop during shutdown to give ithreads a
  chance to run if the architecture supports native preemption since
  the ithreads will just preempt DELAY().
- Don't call mi_switch() from the page zeroing idle thread for
  architectures that support native preemption as it is unnecessary.
- Native preemption is enabled on the same archs that supported
  ithread preemption, namely alpha, i386, and amd64.

This change should largely be a NOP for the default case as committed
except that we will do fewer context switches in a few cases and will
avoid the run queues completely when preempting.

Approved by:	scottl (with his re@ hat)
Notes:
    svn path=/head/; revision=131481
Diffstat (limited to 'sys/sparc64')
-rw-r--r--  sys/sparc64/sparc64/intr_machdep.c  4
1 file changed, 0 insertions(+), 4 deletions(-)
diff --git a/sys/sparc64/sparc64/intr_machdep.c b/sys/sparc64/sparc64/intr_machdep.c
index 5fc0bd0dd129..c9ba8eaccfe9 100644
--- a/sys/sparc64/sparc64/intr_machdep.c
+++ b/sys/sparc64/sparc64/intr_machdep.c
@@ -230,11 +230,7 @@ sched_ithd(void *cookie)
 	int error;
 
 	iv = cookie;
-#ifdef notyet
 	error = ithread_schedule(iv->iv_ithd);
-#else
-	error = ithread_schedule(iv->iv_ithd, 0);
-#endif
 	if (error == EINVAL)
 		intr_stray_vector(iv);
 }