By distributing application load among multiple redundant servers, clustering maintains performance and keeps users blissfully unaware of single-server failures. In this Open source Java projects installment, Steven Haines introduces Terracotta, an enterprise Java clustering solution. Find out why Terracotta, unlike traditional clustering solutions, doesn't make you sacrifice an iota of reliability in the name of performance. Level: Intermediate
Terracotta is an open source enterprise Java clustering solution that boasts near-linear scalability and 100 percent reliability. It supports standard HTTP session clustering in Apache Tomcat and Oracle WebLogic, as well as open source frameworks such as Struts, Spring, and Hibernate. I'll start by explaining what makes Terracotta's approach to clustering unique. Then, after walking you through installation, I'll show you how to configure Terracotta to cluster a sample Web application that uses HTTP sessions, and how to deploy Terracotta clustering in a production environment.
Clustering solves two fundamental problems for mission-critical applications: scalability and fail-over. Scalability measures how well an application can maintain its performance under increasing load. Clustering addresses scalability by letting you distribute the load among several physical servers or server instances. Theoretically perfect scalability is linear: as new servers are added to the cluster, each one adds capacity for the same number of users. For example, if one server can support 500 users, then two servers can support 1,000 users and three servers can support 1,500 users.
The concepts of performance and scalability are often interwoven, but they're distinct. Performance measures whether an application can respond to a request within its defined service-level agreement (SLA); scalability, as defined above, measures how well that performance holds up as load increases. Horizontal clustering distributes load across multiple physical machines; you can think of it as "scaling out." Vertical clustering is "scaling up": load is distributed among multiple application server instances running on the same physical server. Because a single JVM instance is constrained by its heap size and garbage-collection behavior, vertical clustering can sometimes make fuller use of a large server's memory and processors than one instance can on its own.
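As a purely illustrative sketch, the following snippet prints how much heap and how many processors a single JVM instance actually sees; when one instance can't be given enough heap to exploit a large machine, running additional instances side by side (vertical clustering) is one way to put the remaining resources to work.

// Illustrative only: inspect the resources available to this one JVM instance.
// If a single instance's heap is capped well below the machine's physical memory
// (for example, by a 32-bit JVM), vertical clustering lets additional instances
// use the remaining memory and CPUs.
public class JvmResources {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("Max heap for this JVM : " + (rt.maxMemory() / (1024 * 1024)) + " MB");
        System.out.println("Available processors  : " + rt.availableProcessors());
    }
}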
The other side of clustering is fail-over in the event of server failure. Successful fail-over makes outages transparent to users while preserving their state within the application. It requires a strategy for replicating a user's state to one or more secondary servers and then, if the primary server goes down, redirecting all subsequent requests to a secondary. Deciding how, when, and where to send that state data is the fundamental challenge in implementing this strategy effectively.
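To make this concrete, here is a minimal, hypothetical servlet that keeps per-user state in the HttpSession. In a clustered container, it is exactly this session state that must be replicated to a secondary server so that a fail-over goes unnoticed by the user.

import java.io.IOException;
import java.io.Serializable;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical example: a servlet that keeps per-user state in the HttpSession.
// In a cluster, this session state is what must be replicated to a secondary
// server so that a fail-over is invisible to the user.
public class ShoppingCartServlet extends HttpServlet {

    // Session attributes must be Serializable for traditional replication.
    public static class Cart implements Serializable {
        private static final long serialVersionUID = 1L;
        private int itemCount;

        public void addItem() { itemCount++; }
        public int getItemCount() { return itemCount; }
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        HttpSession session = request.getSession(true);
        Cart cart = (Cart) session.getAttribute("cart");
        if (cart == null) {
            cart = new Cart();
        }
        cart.addItem();
        // Re-setting the attribute signals most containers that the state has
        // changed and should be replicated to the other servers in the cluster.
        session.setAttribute("cart", cart);

        response.getWriter().println("Items in cart: " + cart.getItemCount());
    }
}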
Serialization -- the process of converting a Java object to a binary form -- is the traditional way that data is sent. Application servers typically detect when a stateful object has changed, serialize it, and send it to the replicated servers. This strategy is inefficient because serialization is all or nothing. In many applications, such as those powered by portals, a user's stateful information can be measured in megabytes. Even if the user changes only a single byte, such as flipping a preference from "true" to "false," the application server must construct a serialized version of a potentially multi-megabyte object and send all of that data across the network to its replicated servers. This inefficiency is what keeps traditional clustering from scaling linearly.
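A short sketch (class and field names are hypothetical) makes that cost concrete: even after a single boolean preference is flipped, whole-object serialization of the session attribute still produces the entire multi-megabyte byte stream.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCostDemo {

    // Hypothetical portal-style session attribute: a few megabytes of state
    // plus one tiny user preference.
    static class PortalState implements Serializable {
        private static final long serialVersionUID = 1L;
        private final byte[] personalizationData = new byte[2 * 1024 * 1024]; // ~2 MB
        private boolean showWelcomePage = true;

        void setShowWelcomePage(boolean show) { this.showWelcomePage = show; }
    }

    // Serialize the object the way a container would before replicating it,
    // and report how many bytes cross the network.
    private static int serializedSize(Object obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        PortalState state = new PortalState();
        System.out.println("Initial replication payload: " + serializedSize(state) + " bytes");

        // The user flips a single boolean preference...
        state.setShowWelcomePage(false);

        // ...but whole-object serialization still sends the entire graph.
        System.out.println("Payload after one-bit change: " + serializedSize(state) + " bytes");
    }
}

Both payloads come out at roughly two megabytes, which is the inefficiency Terracotta is designed to avoid.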