
Book excerpt: Executing tasks in threads

When creating threads to perform tasks, look to the Executor framework



Processing a Web request involves a mix of computation and I/O. The server must perform socket I/O to read the request and write the response, which can block due to network congestion or connectivity problems. It may also perform file I/O or make database requests, which can also block. In a single-threaded server, blocking not only delays completing the current request, but prevents pending requests from being processed at all. If one request blocks for an unusually long time, users might think the server is unavailable because it appears unresponsive. At the same time, resource utilization is poor, since the CPU sits idle while the single thread waits for its I/O to complete.

In server applications, sequential processing rarely provides either good throughput or good responsiveness. There are exceptions, such as when tasks are few and long-lived, or when the server serves a single client that makes only a single request at a time, but most server applications do not work this way. (In some situations, sequential processing may offer a simplicity or safety advantage; most GUI frameworks process tasks sequentially using a single thread.)
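The single-threaded server described above might look like the following sketch. Here `handleRequest` is a placeholder for the request-handling logic, not code from the excerpt; the key point is that any blocking inside it stalls every connection waiting behind it.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sequential server: requests are accepted and handled one at a time.
// If handleRequest blocks on I/O, no new connection can be accepted.
class SingleThreadWebServer {
    public static void main(String[] args) throws IOException {
        ServerSocket socket = new ServerSocket(80);
        while (true) {
            Socket connection = socket.accept();
            handleRequest(connection);   // blocks the accept loop
        }
    }

    private static void handleRequest(Socket connection) {
        // request-handling logic (placeholder)
    }
}
```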

Explicitly creating threads for tasks

A more responsive approach is to create a new thread for servicing each request, as shown in ThreadPerTaskWebServer in Listing 2.

Listing 2. Web server that starts a new thread for each request

   import java.io.IOException;
   import java.net.ServerSocket;
   import java.net.Socket;

   class ThreadPerTaskWebServer {
      public static void main(String[] args) throws IOException {
         ServerSocket socket = new ServerSocket(80);
         while (true) {
            final Socket connection = socket.accept();
            Runnable task = new Runnable() {
               public void run() {
                  handleRequest(connection);
               }
            };
            new Thread(task).start();
         }
      }
   }


The ThreadPerTaskWebServer is similar in structure to the single-threaded version: the main thread still alternates between accepting an incoming connection and dispatching the request. The difference is that for each connection, the main loop creates a new thread to process the request instead of processing it within the main thread. This has three main consequences:

  • Task processing is offloaded from the main thread, enabling the main loop to resume waiting for the next incoming connection more quickly. This enables new connections to be accepted before previous requests complete, improving responsiveness.
  • Tasks can be processed in parallel, enabling multiple requests to be serviced simultaneously. This may improve throughput if there are multiple processors, or if tasks need to block for any reason such as I/O completion, lock acquisition, or resource availability.
  • Task-handling code must be thread-safe, because it may be invoked concurrently for multiple tasks.
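The thread-safety requirement in the last point bites as soon as handlers share mutable state. As an illustration (this class is not from the excerpt), a request counter incremented with a plain `long++` is not atomic under concurrent access, while `AtomicLong` is:

```java
import java.util.concurrent.atomic.AtomicLong;

// Shared state touched by many request-handling threads must be
// thread-safe. incrementAndGet() is an atomic read-modify-write,
// unlike the three-step load/add/store of a plain long++.
class RequestStats {
    private final AtomicLong requestsServed = new AtomicLong();

    // Safe to call concurrently from any number of handler threads.
    long recordRequest() {
        return requestsServed.incrementAndGet();
    }
}
```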


Under light to moderate load, the thread-per-task approach is an improvement over sequential execution. As long as the request arrival rate does not exceed the server's capacity to handle requests, this approach offers better responsiveness and throughput.

Disadvantages of unbounded thread creation

For production use, however, the thread-per-task approach has some practical drawbacks, especially when a large number of threads may be created.
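The remedy the excerpt's title points toward is the Executor framework: submit tasks to a thread pool rather than creating a thread per task. A minimal sketch, assuming a hypothetical `handleRequest` and an illustrative pool size of 100, might look like this:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A fixed-size pool decouples task submission from task execution
// and caps the number of threads, avoiding unbounded thread creation.
class TaskExecutionWebServer {
    private static final int NTHREADS = 100;  // illustrative sizing
    private static final ExecutorService exec =
        Executors.newFixedThreadPool(NTHREADS);

    public static void main(String[] args) throws IOException {
        ServerSocket socket = new ServerSocket(80);
        while (true) {
            final Socket connection = socket.accept();
            exec.execute(() -> handleRequest(connection));
        }
    }

    private static void handleRequest(Socket connection) {
        // request-handling logic (placeholder)
    }
}
```

Submission returns immediately even when all pool threads are busy; excess tasks queue instead of consuming a new thread each.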

