The preemptive multithreading model
The alternative to a cooperative model is a preemptive one, in which the operating system itself uses a timer to force a context swap. The interval between timer ticks is called a time slice. Preemptive systems are less efficient than cooperative ones because thread management must be done by the operating-system kernel, but they're easier to program (synchronization issues aside) and tend to be more reliable, since starvation is less of a problem. The most important advantage of preemptive systems is parallelism. Since cooperative threads are scheduled by a user-level subroutine library, not by the OS, the best you can get with a cooperative model is concurrency. To get parallelism, the OS must do the scheduling. Of course, four threads running in parallel on four processors will run much faster than the same four threads running concurrently on one.
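The concurrency-versus-parallelism point can be made concrete with a small sketch. The class name and the splitting strategy below are illustrative, not from the article: two kernel-scheduled (preemptive) threads each sum half of an array, and on a multiprocessor the OS can run them on separate processors at the same time; a cooperative library could only interleave them on one.

```java
// Sketch (illustrative names): two preemptive threads summing
// halves of an array. On a multiprocessor the OS can schedule
// each thread onto its own processor, giving true parallelism.
public class ParallelSum {
    public static long sum(long[] data) throws InterruptedException {
        int mid = data.length / 2;
        long[] partial = new long[2];          // one slot per thread
        Thread lo = new Thread(() -> {
            for (int i = 0; i < mid; i++) partial[0] += data[i];
        });
        Thread hi = new Thread(() -> {
            for (int i = mid; i < data.length; i++) partial[1] += data[i];
        });
        lo.start();
        hi.start();
        lo.join();                             // wait for both slices;
        hi.join();                             // join() also publishes the writes
        return partial[0] + partial[1];
    }

    public static void main(String[] args) throws InterruptedException {
        long[] data = new long[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        System.out.println(sum(data));         // 500500
    }
}
```

Note that the programmer never yields: the kernel's timer, not the code, decides when each thread runs.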
Some operating systems, like Windows 3.1, support only cooperative multithreading. Others, like NT, support only preemptive threading. (You can simulate cooperative threading in NT with a user-mode library such as the "fiber" library, but fibers aren't fully integrated into the OS.) Solaris provides the best (or worst) of all worlds by supporting both cooperative and preemptive models in the same program.
The final OS issue has to do with the way in which kernel-level threads are mapped into user-mode processes. NT uses a one-to-one model, illustrated in the following picture.
[Figure: NT's one-to-one mapping of user-mode threads to kernel threads]
NT user-mode threads effectively are kernel threads. The OS maps them directly onto a processor, and they are always preemptive. All thread manipulation and synchronization are done via kernel calls (with a 600-machine-cycle overhead for every call). This is a straightforward model, but it is neither flexible nor efficient.
The Solaris model, pictured below, is more interesting. Solaris supplements the notion of a thread with that of a lightweight process (LWP). The LWP is a schedulable unit on which one or more threads can run, and parallel processing is done at the LWP level. Normally, LWPs reside in a pool and are assigned to particular processors as necessary. An LWP can also be "bound" to a specific processor if it's doing something particularly time-critical, thereby preventing other LWPs from using that processor.
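A rough user-level analogue of the LWP pool, using names and an API of my own choosing rather than anything Solaris-specific: a fixed pool of worker threads plays the role of the LWPs, and the tasks submitted to it play the role of the (more numerous) threads that share them.

```java
// Sketch: a fixed thread pool as a loose analogue of a Solaris LWP
// pool. The pool's workers stand in for LWPs; the submitted tasks
// stand in for the user-level threads that share them. Illustrative only.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LwpPoolAnalogy {
    public static int runTasks(int taskCount, int poolSize) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize); // the "LWPs"
        CountDownLatch done = new CountDownLatch(taskCount);
        for (int i = 0; i < taskCount; i++) {
            pool.submit(done::countDown);  // each task runs on whichever worker is free
        }
        done.await();                      // block until every task has run
        pool.shutdown();
        return taskCount;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(20, 4));  // 20
    }
}
```

Twenty tasks complete even though only four workers ever exist, just as many threads can share a small pool of LWPs.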
Up at the user level, you have a system of cooperative, or "green," threads. In a simple situation, a process will have one LWP shared by all the green threads. The threads must yield control to each other voluntarily, but the single LWP the threads share can be preempted by an LWP in another process. This way the processes are preemptive with respect to each other (and can execute in parallel), but the threads within the process are cooperative (and execute concurrently).
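The cooperative half of the picture can be sketched with a toy green-thread scheduler (the class and method names below are my own inventions, not a real green-thread API). Every task runs on a single carrier thread, standing in for the one shared LWP, and a task runs only until it voluntarily returns from its step; nothing inside the process can preempt it.

```java
// Sketch: a toy cooperative ("green") scheduler. All tasks share one
// carrier thread (analogous to one LWP); each task must voluntarily
// return from step() before any other task can run. Illustrative only.
import java.util.ArrayDeque;
import java.util.Deque;

public class GreenScheduler {
    /** A cooperative task: step() does one slice of work, returns false when done. */
    interface GreenTask { boolean step(); }

    private final Deque<GreenTask> ready = new ArrayDeque<>();

    void spawn(GreenTask t) { ready.addLast(t); }

    /** Round-robin until every task finishes; there is never any preemption. */
    void run() {
        while (!ready.isEmpty()) {
            GreenTask t = ready.removeFirst();
            if (t.step()) ready.addLast(t);   // task yielded voluntarily; requeue it
        }
    }

    public static void main(String[] args) {
        GreenScheduler sched = new GreenScheduler();
        StringBuilder trace = new StringBuilder();
        for (char name : new char[] {'A', 'B'}) {
            sched.spawn(new GreenTask() {
                int ticks = 2;
                public boolean step() { trace.append(name); return --ticks > 0; }
            });
        }
        sched.run();
        System.out.println(trace);            // ABAB
    }
}
```

The interleaving is perfectly deterministic, which is exactly what you give up (and gain) relative to a preemptive scheduler: no task can starve the others by accident only if every task is polite enough to yield.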
A process isn't limited to a single LWP, however. The green threads can share a pool of LWPs in a single process. The green threads can be attached (or "bound") to an LWP in two ways: