Processing a Web request involves a mix of computation and I/O. The server must perform socket I/O to read the request and write the response, which can block due to network congestion or connectivity problems. It may also perform file I/O or make database requests, which can also block. In a single-threaded server, blocking not only delays completing the current request, but prevents pending requests from being processed at all. If one request blocks for an unusually long time, users might think the server is unavailable because it appears unresponsive. At the same time, resource utilization is poor, since the CPU sits idle while the single thread waits for its I/O to complete.
In server applications, sequential processing rarely provides either good throughput or good responsiveness. There are exceptions, such as when tasks are few and long-lived, or when the server serves a single client that makes only a single request at a time, but most server applications do not work this way. (In some situations, sequential processing may offer a simplicity or safety advantage; most GUI frameworks process tasks sequentially using a single thread.)
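To make this concrete, the following is a minimal sketch of what such a sequential, single-threaded server might look like; the class name is illustrative, and handleRequest stands in for whatever request-processing logic the server needs, as in Listing 2 below.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sequential server sketch: each request is handled to completion before
// the next connection is even accepted, so any blocking inside
// handleRequest stalls every waiting client.
class SequentialWebServer {
    public static void main(String[] args) throws IOException {
        ServerSocket socket = new ServerSocket(80);
        while (true) {
            Socket connection = socket.accept();
            handleRequest(connection);  // blocks the only thread until done
        }
    }

    private static void handleRequest(Socket connection) {
        // Read the request and write the response (details elided).
    }
}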
A more responsive approach is to create a new thread for servicing each request, as shown in ThreadPerTaskWebServer in Listing 2.
Listing 2. Web server that starts a new thread for each request
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

class ThreadPerTaskWebServer {
    public static void main(String[] args) throws IOException {
        ServerSocket socket = new ServerSocket(80);
        while (true) {
            // Declared final so the anonymous Runnable can capture it.
            final Socket connection = socket.accept();
            Runnable task = new Runnable() {
                public void run() {
                    handleRequest(connection);
                }
            };
            new Thread(task).start();
        }
    }

    private static void handleRequest(Socket connection) {
        // Read the request and write the response (details elided).
    }
}
The ThreadPerTaskWebServer is similar in structure to the single-threaded version: the main thread still alternates between accepting an incoming connection and dispatching the request. The difference is that for each connection, the main loop creates a new thread to process the request instead of processing it within the main thread. This has three main consequences:
- Request processing is offloaded from the main thread, so the main loop can return to waiting for the next incoming connection more quickly. New connections can be accepted before earlier requests complete, which improves responsiveness.
- Requests can be processed in parallel, so multiple requests can be serviced simultaneously. This can improve throughput when there are multiple processors, or when request handling blocks on I/O, lock acquisition, or other resources.
- The request-handling code must be thread-safe, because it may be invoked concurrently for many requests (see the sketch below).
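To illustrate the last point, here is one hypothetical way the shared part of a request handler might be written: a hit counter shared by all worker threads, made safe for concurrent access with AtomicLong. The counter and class name are illustrative assumptions, not part of the article's code.

import java.net.Socket;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical request handler shared by every worker thread. Any state it
// keeps across requests (here, a hit counter) must tolerate concurrent
// access, because many threads may run this code at the same time.
class RequestHandler {
    private static final AtomicLong requestsServed = new AtomicLong();

    static void handleRequest(Socket connection) {
        requestsServed.incrementAndGet();  // atomic, so concurrent calls never lose an update
        // Read the request, write the response, and close the socket
        // (details elided, as in Listing 2).
    }
}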
Under light to moderate load, the thread-per-task approach is an improvement over sequential execution. As long as the request arrival rate does not exceed the server's capacity to handle requests, this approach offers better responsiveness and throughput.
For production use, however, the thread-per-task approach has some practical drawbacks, especially when a large number of threads may be created: