
Windows NT's priority architecture
The columns are actual priority levels, only 22 of which must be shared by all applications. (The others are used by NT itself.) The rows are priority classes. The threads of a process pegged at the idle priority class run at levels 1 through 6 and 15, depending on their assigned logical priority level. The threads of a process pegged at the normal priority class run at levels 1, 6 through 10, or 15 if the process doesn't have the input focus. If it does have the input focus, the threads run at levels 1, 7 through 11, or 15. This means that a high-priority thread of an idle-priority-class process can preempt a low-priority thread of a normal-priority-class process, but only if the normal-priority-class process is running in the background. Notice that a process running in the "high" priority class has only six priority levels available to it. The other classes have seven.
NT provides no way to limit the priority class of a process. Any thread in any process on the machine can take over control of the box at any time by boosting its own priority class; there is no defense against this.
The technical term I use to describe NT's priority is unholy mess. In practice, priority is virtually worthless under NT.
So what's a programmer to do? Between NT's limited number of priority levels and its uncontrollable priority boosting, there's no absolutely safe way for a Java program to use priority levels for scheduling. One workable compromise is to restrict yourself to Thread.MAX_PRIORITY, Thread.MIN_PRIORITY, and Thread.NORM_PRIORITY when you call setPriority(). This restriction at least avoids the 10-levels-mapped-to-7-levels problem. I suppose you could use the os.name system property to detect NT, and then call a native method to turn off priority boosting, but that won't work if your app is running under Internet Explorer unless you also use Sun's VM plug-in. (Microsoft's VM uses a nonstandard native-method implementation.) In any event, I hate to use native methods. I usually avoid the problem as much as possible by putting most threads at NORM_PRIORITY and using scheduling mechanisms other than priority. (I'll discuss some of these in future installments of this series.)
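The three-constant compromise can be sketched as follows. The clampPriority() helper is my own illustrative name, not a standard API; it simply maps any requested priority onto the nearest of the three "safe" values before handing it to setPriority().

```java
// A sketch of the compromise described above: restrict setPriority()
// calls to the three named constants so the JVM-to-NT mapping stays
// predictable. clampPriority is an illustrative helper, not a library call.
public class PrioritySketch {
    /** Map an arbitrary requested priority onto the three "safe" values. */
    static int clampPriority(int requested) {
        if (requested > Thread.NORM_PRIORITY) return Thread.MAX_PRIORITY;
        if (requested < Thread.NORM_PRIORITY) return Thread.MIN_PRIORITY;
        return Thread.NORM_PRIORITY;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // background work runs at the clamped priority
        });
        worker.setPriority(clampPriority(7));   // anything above NORM maps to MAX
        worker.start();
        worker.join();
    }
}
```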
Operating systems typically support one of two threading models: cooperative and preemptive.
The cooperative multithreading model
In a cooperative system, a thread retains control of its processor until it decides to give it up (which might be never). The various threads
have to cooperate with each other or all but one of the threads will be "starved" (meaning, never given a chance to run).
Scheduling in most cooperative systems is done strictly by priority level. When the current thread gives up control, the highest-priority
waiting thread gets control. (An exception to this rule is Windows 3.x, which uses a cooperative model but doesn't have much
of a scheduler. The window that has the focus gets control.)
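Strict priority-based selection is easy to picture in code. The following sketch (Task and pickNext() are invented names for illustration) shows the core rule: when the running task gives up control, the highest-priority waiting task runs next.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of scheduling "strictly by priority level," as in most
// cooperative systems: when the current task yields, the
// highest-priority waiting task is selected to run.
public class PriorityPick {
    record Task(String name, int priority) {}

    // Highest priority first.
    static final PriorityQueue<Task> waiting =
        new PriorityQueue<>(Comparator.comparingInt(Task::priority).reversed());

    /** Called when the running task voluntarily gives up control. */
    static Task pickNext() { return waiting.poll(); }

    public static void main(String[] args) {
        waiting.add(new Task("logger",  1));
        waiting.add(new Task("ui",      9));
        waiting.add(new Task("network", 5));
        System.out.println(pickNext().name());   // the level-9 "ui" task runs first
    }
}
```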
The main advantage of cooperative multithreading is that it's very fast and has very low overhead. For example, a context swap -- a transfer of control from one thread to another -- can be performed entirely by a user-mode subroutine library without entering the OS kernel. (In NT, which is something of a worst case, entering the kernel wastes 600 machine cycles. A user-mode context swap in a cooperative system does little more than a C setjmp/longjmp call would do.) You can have thousands of threads in your applications without significantly impacting performance. Since you don't lose control involuntarily in cooperative systems, you don't have to worry about synchronization either. That is, you never have to worry about an atomic operation being interrupted. The main disadvantage of the cooperative model is that cooperative systems are very difficult to program. Lengthy operations have to be manually divided into smaller chunks, which often must interact in complex ways.
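The chunking burden looks something like this sketch. CoopScheduler and ChunkedTask are invented names; each task does one small chunk of work, then voluntarily returns control to the scheduler, which runs tasks round-robin on a single thread.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A minimal sketch of cooperative scheduling: a lengthy operation is
// manually divided into chunks, and each task gives up control after
// every chunk. Everything runs on one thread; nothing is preempted.
public class CoopScheduler {
    interface ChunkedTask {
        /** Do one chunk of work; return true while more work remains. */
        boolean step();
    }

    private final Queue<ChunkedTask> ready = new ArrayDeque<>();

    void add(ChunkedTask t) { ready.add(t); }

    /** Run tasks round-robin until all are finished. */
    void run() {
        while (!ready.isEmpty()) {
            ChunkedTask t = ready.remove();
            if (t.step())          // task yields after each chunk...
                ready.add(t);      // ...and is requeued if not done
        }
    }

    public static void main(String[] args) {
        CoopScheduler s = new CoopScheduler();
        // A "lengthy operation" -- summing 1..100 -- split into chunks of 10.
        final int[] sum  = {0};
        final int[] next = {1};
        s.add(() -> {
            for (int end = next[0] + 10; next[0] < end && next[0] <= 100; next[0]++)
                sum[0] += next[0];
            return next[0] <= 100;   // more chunks remain?
        });
        s.run();
        System.out.println(sum[0]);   // prints 5050
    }
}
```

Note how even this trivial sum has to carry its loop state (next, sum) across chunks explicitly; in real code that bookkeeping is where the complexity piles up.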