Server load balancing architectures, Part 1: Transport-level load balancing

High scalability and availability for server farms

The example in Listing 5 implements a local, in-process response cache. Because each server instance keeps its own copy of the cache, this approach requires client affinity: the load balancer must direct all requests of a given client to the same server instance.

Listing 5. Local cache-based example requiring client affinity

import java.util.LinkedHashMap;
import java.util.Map.Entry;

class LocalHttpResponseCache extends LinkedHashMap<String, IHttpResponse> implements IHttpResponseCache {

   public synchronized IHttpResponse put(String key, IHttpResponse value) {
      return super.put(key, value);
   }

   public synchronized void remove(String key) {
      super.remove(key);
   }

   public synchronized IHttpResponse get(String key) {
      return super.get(key);
   }

   protected boolean removeEldestEntry(Entry<String, IHttpResponse> eldest) {
      return size() > 1000;   // evict the eldest entry once the cache holds more than 1,000 responses
   }
}


class Server {

   public static void main(String[] args) throws Exception {
      // the cache interceptor answers repeated requests from the local cache
      // before they reach the application's request handler
      RequestHandlerChain handlerChain = new RequestHandlerChain();
      handlerChain.addLast(new CacheInterceptor(new LocalHttpResponseCache()));
      handlerChain.addLast(new MyRequestHandler());

      // start an HTTP server on port 8080 with the handler chain
      HttpServer httpServer = new HttpServer(8080, handlerChain);
      httpServer.run();
   }
}
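
To see why this setup requires client affinity, consider a client that sends the same GET request twice: the second request can be answered from LocalHttpResponseCache only if the load balancer routes it to the server instance that handled the first one. The following is a minimal sketch, assuming xLightweb's client-side API (HttpClient, GetRequest, IHttpResponse); the class name AffinityDemoClient and the request URL are made up for illustration.

import org.xlightweb.GetRequest;
import org.xlightweb.IHttpResponse;
import org.xlightweb.client.HttpClient;

class AffinityDemoClient {

   public static void main(String[] args) throws Exception {
      HttpClient httpClient = new HttpClient();

      // first call: the server computes the response and stores it in its local cache
      IHttpResponse first = httpClient.call(new GetRequest("http://localhost:8080/items/4711"));

      // second call: served from the cache only if this connection reaches
      // the same server instance as the first call
      IHttpResponse second = httpClient.call(new GetRequest("http://localhost:8080/items/4711"));

      System.out.println(first.getStatus() + " / " + second.getStatus());

      httpClient.close();
   }
}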

LVS supports affinity by enabling persistence: it remembers the last connection for a predefined period of time, so that a particular client is sent to the same real server across different TCP connections. Persistence doesn't really help, however, for dial-up clients. If a dial-up link comes in through a provider proxy, different TCP connections within the same session can arrive from different source addresses, so they are not necessarily kept on the same real server.
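
As an aside, LVS persistence is typically switched on per virtual service through ipvsadm's -p (persistence timeout) option, as in the sketch below; the virtual and real server addresses are placeholders.

# virtual HTTP service, round-robin scheduling, 300-second persistence
ipvsadm -A -t 10.0.0.1:80 -s rr -p 300

# attach two real servers via NAT (masquerading)
ipvsadm -a -t 10.0.0.1:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 10.0.0.1:80 -r 10.0.0.12:80 -m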

Conclusion to Part 1

Infrastructures based on pure transport-level server load balancers are common. They are simple, flexible, and highly efficient, and they place no restrictions on the client side. Often such architectures are combined with distributed cache or session servers to handle application-level caching and session-data issues. However, as the overhead of moving data to and from the cache or session servers grows, such architectures become increasingly inefficient. By implementing client affinity with an application-level server load balancer, you can avoid copying large datasets between servers. Read Server load balancing architectures, Part 2 for a discussion of application-level load balancing.

About the author

Gregor Roth, creator of the xLightweb HTTP library, works as a software architect at United Internet group, a leading European Internet service provider to which GMX, 1&1, and Web.de belong. His areas of interest include software and system architecture, enterprise architecture management, object-oriented design, distributed computing, and development methodologies.

Read more about Enterprise Java in JavaWorld's Enterprise Java section.
