J2EE clustering, Part 2

Migrate your application from a single machine to a cluster, the easy way

Within the J2EE framework, clusters provide an infrastructure for high availability (HA) and scalability. A cluster is a group of application servers that transparently run your J2EE application as if the group were a single entity. However, Web applications behave differently when clustered: they must share application objects with other cluster members through serialization. Moreover, you'll have to contend with the extra configuration and setup time.

To avoid major Web application rework and redesign, consider cluster-related programming issues from the very beginning of your development process, along with the critical setup and configuration decisions needed to support intelligent load balancing and failover. Finally, you will need a management strategy for handling failures.


Building on the information in Part 1, I'll impart an applied understanding of clustering. Further, I'll examine clustering-related issues and their possible solutions, as well as the advantages and disadvantages of each choice. I'll also demonstrate programming guidelines for clustering. Finally, I'll show you how to prepare for outages. (Note that, due to licensing constraints, this article will not cover benchmarking.)

Set up your cluster

During cluster setup, you need to make important decisions. First, you have to choose a load balancing method. Second, you must decide how to support server affinity. Finally, you need to determine how you will deploy the server instances among clustered nodes.

Load balancing

You can choose between two generally recognized options for load balancing a cluster: DNS (Domain Name System) round robin or hardware load balancers.

DNS round robin

DNS is the process by which a logical name (e.g., www.javaworld.com) is converted to an IP address. In DNS round-robin load balancing, a single logical name can resolve to any of the IP addresses of the machines in the cluster.
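You can see this behavior from a client by listing every address a logical name resolves to. The sketch below is illustrative only (www.example.com stands in for your cluster's name); a round-robin name returns the full set of A records, and most clients simply use the first one, which the DNS server rotates across lookups:

     import java.net.InetAddress;
     import java.net.UnknownHostException;

     public class DnsRoundRobinCheck {
        public static void main(String[] args) throws UnknownHostException {
           // A round-robin DNS name maps to several A records, one per cluster machine.
           InetAddress[] addresses = InetAddress.getAllByName("www.example.com");
           for (InetAddress address : addresses) {
              System.out.println(address.getHostAddress());
           }
           // Most clients use only the first address returned, so successive
           // lookups (rotated by the DNS server) spread clients across machines.
        }
     }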

DNS round-robin load balancing's advantages include:

  • Cheap and easy setup
  • Simplicity

Its disadvantages include:

  • No server affinity support. When a user receives an IP address, it is cached on the browser. Once the cache expires, the user makes another request for the IP address associated with a logical name. That second request can return the IP address of any other machine in the cluster, resulting in a lost session.
  • No HA support. Imagine a cluster of n servers. If one of those servers goes down, every nth request to the DNS server will go to the dead server.
  • Changes to the cluster take time to propagate to the rest of the Internet. Many corporations' and ISPs' DNS servers cache DNS lookups from their clients. Even if your cluster's DNS list could change dynamically, the cached entries on other DNS servers would take time to expire. For example, after a downed server is removed from your cluster's DNS list, AOL clients could still attempt to hit the downed server if AOL's DNS servers had cached an entry for it. As a result, AOL users would not be able to connect to your site even if other machines in the cluster were available.
  • No guarantee of equal client distribution across all servers in the cluster. If you don't configure cooperating DNS servers to support DNS load balancing, they could take only the first IP address returned from the initial lookup and use that for their client requests. Imagine a partner corporation with thousands of employees all pinned to a single server in your cluster!

Hardware load balancers

In contrast, a hardware load balancer (like F5's Big IP) solves most of these problems through virtual IP addressing. A load balancer presents to the world a single IP address for the cluster. The load balancer receives each request and rewrites headers to point to other machines in the cluster. If you remove any machine in the cluster, the changes take effect immediately.

Hardware load balancers' advantages include:

  • Server affinity when you're not using SSL
  • HA services (failover, monitoring, and so on)
  • Metrics (active sessions, response time, and so on)
  • Guaranteed equal client distribution across cluster

However, hardware load balancers exhibit disadvantages:

  • High cost -- tens of thousands of dollars, depending on features
  • Complex setup and configuration

Once you have picked your load balancing scheme, you must decide how your cluster will support server affinity.

Server affinity

Server affinity becomes a problem when you use SSL without Web server proxies. (Server affinity directs a user to a particular server in the cluster for the duration of her session.) Hardware load balancers rely on reading cookie or URL information to determine where to direct requests. If a request is SSL encrypted, the hardware load balancer cannot read the header, cookie, or URL information. To solve the problem, you have two choices: Web server proxies or SSL accelerators.

Web server proxies

In this scenario, a hardware load balancer acts like a DNS load balancer for the Web server proxies, except that it acts through a single IP address. The Web servers decrypt SSL requests and pass them to the Web server plug-in (Web server proxy). Once the plug-in receives a decrypted request, it can parse the cookie or URL information and redirect the request to the application server where the user's session state resides.

With Web server proxies, the major advantages include:

  • Server affinity with SSL
  • No additional hardware required (only the hardware load balancer is needed)

The disadvantages are:

  • The hardware load balancer cannot use metrics to direct requests
  • Extensive SSL use puts an additional strain on the Web servers
  • Web server proxies need to support server affinity

If a large portion of the transactions your site processes must be secure, SSL accelerators, explained in the next section, can add flexibility to your cluster topology while supporting server affinity.

SSL accelerators

An SSL accelerator is networking hardware that handles SSL processing for the cluster. It sits in front of the hardware load balancer, allowing the load balancer to read decrypted information in cookies, headers, and URLs. The hardware load balancer can then use its own metrics to direct requests. With this setup, you can forgo Web proxies if you choose and still achieve server affinity with SSL.

With SSL accelerators, you benefit from:

  • A flexible topology layout (with Web proxies or without) that supports server affinity and SSL
  • Off-loaded SSL processing to the SSL accelerator, which increases scalability
  • Centralized SSL certificate management in a single box

The disadvantages comprise:

  • A high cost when you buy two accelerators to achieve HA
  • Added setup and configuration complexity

Once you have decided on your server affinity setup, you need to tactically place your application server instances throughout the cluster nodes.

Application server distribution

When distributing application server instances throughout your cluster, you must decide whether to run multiple application server instances on a single node and determine the total number of nodes in your cluster.

The number of application server instances on a single node depends on the number of CPUs on the box, CPU utilization, and available memory. Consider multiple instances on a single box in any of three situations:

  • You have three or more CPUs not fully saturated under load
  • The instance heap size is set too large, causing garbage collection times to increase
  • The application is not I/O bound

Determining the optimal number of nodes in your cluster is an iterative process. First, profile and optimize the application. Second, use load-testing software to simulate your expected peak usage. Finally, add additional surplus nodes to handle the load when failures occur.
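For example (the numbers here are purely illustrative), if load testing shows that one node comfortably handles 250 concurrent users and you expect a peak of 1,000 concurrent users, you would provision four nodes to carry the load plus at least one surplus node, so the cluster can lose a machine without dropping below the capacity your peak demands.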

Ideally, you would push out development releases to a staging cluster to catch clustering issues as they occur. Unfortunately, developers create most applications for a single machine and then migrate them to a clustered environment, a situation that can break the application.

Session-storage guidelines

To minimize the breakage, follow these general guidelines for application servers that use in-memory or database session persistence:

  • Make sure all objects placed in the HttpSession, and all objects they reference (recursively), are serializable. As a rule of thumb, all objects should implement java.io.Serializable as part of their canonical form.
  • Whenever you change an object's state in the HttpSession, call session.setAttribute(...) to flag the object as changed and save the changes to a backup server or database:

     
       AccountModel am = (AccountModel) session.getAttribute("account");
       am.setCreditCard(cc);
       // You need this call so the AccountModel object on the backup server
       // receives the credit card
       session.setAttribute("account", am);
    
  • The ServletContext is not serializable, so do not use it as an instance variable (unless it is marked transient) in any object stored directly or indirectly in the HttpSession. Getting a reference to the ServletContext is easier in a Servlet 2.3 container, where the HttpSessionBindingEvent's session exposes a getServletContext() method; see the sketch after this list.
  • EJB remotes may not be serializable. When they are not serializable, you need to override the default serialization mechanism as follows (this class does not implement java.io.Serializable because AccountModel, its superclass, does):

     ...
     public class AccountWebImpl extends AccountModel
        implements ModelUpdateListener, HttpSessionBindingListener {

        // The EJB remote reference is not serializable, so mark it transient
        // and serialize its Handle instead.
        transient private Account acctEjb;
        ...

        private void writeObject(ObjectOutputStream s) {
           try {
              s.defaultWriteObject();
              // Write the serializable Handle in place of the remote reference
              Handle acctHandle = acctEjb.getHandle();
              s.writeObject(acctHandle);
           } catch (IOException ioe) {
              Debug.print(ioe);
              throw new GeneralFailureException(ioe);
           } catch (RemoteException re) {
              throw new GeneralFailureException(re);
           }
        }

        private void readObject(ObjectInputStream s) {
           try {
              s.defaultReadObject();
              // Recover the remote reference from the Handle written above
              Handle acctHandle = (Handle) s.readObject();
              Object ref = acctHandle.getEJBObject();
              acctEjb = (Account) PortableRemoteObject.narrow(ref, Account.class);
           } catch (ClassNotFoundException cnfe) {
              throw new GeneralFailureException(cnfe);
           } catch (RemoteException re) {
              throw new GeneralFailureException(re);
           } catch (IOException ioe) {
              Debug.print(ioe);
              throw new GeneralFailureException(ioe);
           }
        }
    
  • HttpSessionBindingListener's valueBound(HttpSessionBindingEvent event) method is called after the session is restored from disk and after every call to HttpSession's setAttribute(...) method. valueBound(HttpSessionBindingEvent event), however, is not called during failover.
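To tie the last two guidelines together, here is a minimal sketch of a session object that keeps its ServletContext reference transient and reacquires it in valueBound(...). The CartModel name and the logging call are purely illustrative, and the sketch assumes a Servlet 2.3 container, where the binding event's session exposes the ServletContext:

     import java.io.Serializable;
     import javax.servlet.ServletContext;
     import javax.servlet.http.HttpSessionBindingEvent;
     import javax.servlet.http.HttpSessionBindingListener;

     public class CartModel implements Serializable, HttpSessionBindingListener {

        // The ServletContext is not serializable, so never let it be
        // serialized with the session; keep the reference transient.
        private transient ServletContext servletContext;

        public void valueBound(HttpSessionBindingEvent event) {
           // Called when this object is added to the session via setAttribute(...).
           // In a Servlet 2.3 container, the event's session exposes the
           // ServletContext, so the transient reference can be reacquired here.
           servletContext = event.getSession().getServletContext();
           servletContext.log("CartModel bound to session " + event.getSession().getId());
        }

        public void valueUnbound(HttpSessionBindingEvent event) {
           // Called when this object is removed or the session is invalidated.
           servletContext = null;
        }
     }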

In-memory session state replication

In-memory session state replication proves more complicated than database persistence because individual objects in the HttpSession are serialized to a backup server as they change, whereas with database session persistence the objects in the session are serialized together when any one of them changes. As a side effect, in-memory session state replication clones each HttpSession object stored directly under a session key. This has virtually no effect if each object stored under a session key is independent of the objects stored under the other session keys. However, if the objects stored under different session keys depend heavily on one another, separate copies will be created on the backup machine.

After failing over, your application will continue to run, but some features may not work -- a shopping cart may refuse to accept more items, for instance. The problem stems from different parts of your application (the code that updates the shopping cart and the JSPs that display it) referring to their own copies of the shopping cart object: the class responsible for updating the cart makes changes to its copy while the JSPs display theirs. In a single-server environment, this problem would not arise because both parts of the application would point to the same shopping cart.

Here is an example of indirectly copying objects:
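The ShoppingCart and OrderModel classes below are hypothetical names used only to sketch the pattern: both session attributes refer to the same cart object on the primary server, but because in-memory replication serializes each attribute independently, the backup server ends up holding two separate copies of the cart.

     import java.io.Serializable;
     import java.util.ArrayList;
     import java.util.List;
     import javax.servlet.http.HttpSession;

     class ShoppingCart implements Serializable {
        private final List items = new ArrayList();
        public void addItem(Object item) { items.add(item); }
        public List getItems() { return items; }
     }

     class OrderModel implements Serializable {
        private final ShoppingCart cart;
        public OrderModel(ShoppingCart cart) { this.cart = cart; }
        public ShoppingCart getCart() { return cart; }
     }

     public class CartSessionSetup {
        public void populateSession(HttpSession session) {
           ShoppingCart cart = new ShoppingCart();
           OrderModel order = new OrderModel(cart);

           // On the primary server, both attributes point to the same cart...
           session.setAttribute("cart", cart);
           session.setAttribute("order", order);

           // ...but each attribute is replicated to the backup independently,
           // so after failover session.getAttribute("cart") and
           // ((OrderModel) session.getAttribute("order")).getCart()
           // refer to two different ShoppingCart copies.
        }
     }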
