J2EE object-caching frameworks

Improve performance in Web portal applications by caching objects

Web applications are typically accessed by many concurrent users. Usually, the application's data is stored in a relational database or filesystem, and accessing those data sources takes time and adds overhead. Database-access bottlenecks can slow down or even crash the application if it receives too many simultaneous requests. Object caching is one technique for overcoming this problem. In this article, Srini Penchikala discusses a simple caching framework he created to cache the lookup data objects in a Web portal project.

Object caching allows applications to share objects across requests and users, and coordinates the objects' life cycles across processes. By storing frequently accessed or expensive-to-create objects in memory, object caching eliminates the need to repeatedly create and load data. It avoids the expensive reacquisition of objects by not releasing the objects immediately after their use. Instead, the objects are stored in memory and reused for any subsequent client requests.

Here's how caching works: When data is retrieved from the data source for the first time, it is temporarily stored in a memory buffer called a cache. When the same data must be accessed again, the object is fetched from the cache instead of the data source. The cached data is released from memory when it's no longer needed. To control when a specific object can be released from memory, a reasonable expiration time must be defined; after that time, the data stored in the object becomes invalid from the Web application's standpoint.
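To make that flow concrete, the following is a minimal sketch of the check-the-cache-first pattern just described. The class and method names (SimpleCache, loadFromDataSource) and the one-hour expiration are illustrative assumptions, not part of any particular framework.

    import java.util.HashMap;
    import java.util.Map;

    // Minimal illustration of the cache-aside flow: look in the cache first,
    // fall back to the data source on a miss or after expiration.
    public class SimpleCache {

        private static final long TIME_TO_LIVE_MILLIS = 60 * 60 * 1000L; // one hour (assumed)

        private final Map entries = new HashMap(); // key -> CacheEntry

        public synchronized Object get(Object key) {
            CacheEntry entry = (CacheEntry) entries.get(key);
            if (entry == null || entry.isExpired()) {
                // Cache miss or stale entry: hit the data source and re-cache the result.
                Object value = loadFromDataSource(key);
                entries.put(key, new CacheEntry(value));
                return value;
            }
            return entry.value; // cache hit
        }

        protected Object loadFromDataSource(Object key) {
            // Placeholder for the expensive database or filesystem access.
            return "value-for-" + key;
        }

        private static class CacheEntry {
            final Object value;
            final long createdAt = System.currentTimeMillis();

            CacheEntry(Object value) {
                this.value = value;
            }

            boolean isExpired() {
                return System.currentTimeMillis() - createdAt > TIME_TO_LIVE_MILLIS;
            }
        }
    }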

Now that we have covered the basics of how caching works, let's look at some of the well-known scenarios in a J2EE application that use object storage mechanisms similar to caching.

Conventional methods for object lookup, such as a simple hashtable, JNDI (Java Naming and Directory Interface), or even EJB (Enterprise JavaBeans), provide a way to store an object in memory and look it up based on a key. But none of these methods provides a mechanism for removing the object from memory when it's no longer needed or for automatically recreating the object when it's accessed after expiration. The HttpSession object (in the javax.servlet.http package) also allows objects to be cached, but it lacks the notions of sharing, invalidation, per-object expiration, automatic loading, and spooling, which are the essential elements of a caching framework.

Object caching in Web portals

A portal must manage both user profiles and the objects available at the portal. Since most Web portals provide single sign-on (SSO), the user profile data must remain available as the user switches between the various modules of the portal application. The user profiles should be securely stored in the cache so other Web users cannot access them. Objects can be aged out of the cache to free up space, or the idle-time feature can remove objects that are no longer being accessed. This simplifies object management, because the application doesn't need to constantly monitor which objects are in demand at any given time; the "hot" objects are automatically available in the cache. Objects that are expensive to create or fetch can be written to a local disk and transparently retrieved as needed. Thus, object caching can be used to manage user profile information as well as lookup data, such as company product information, that can be shared among multiple portal users.

Object caching benefits and liabilities

One of the main benefits of object caching is a significant improvement in application performance. In a multitiered application, data access is an expensive operation compared with other tasks. By keeping frequently accessed data in memory rather than releasing it after its first use, we avoid the cost and time required to reacquire and release it. Object caching improves Web application performance for the following reasons:

  • It reduces the number of trips to the database or other data sources, such as XML databases or ERP (enterprise resource planning) legacy systems
  • It avoids the cost of repeatedly recreating objects
  • It shares objects between threads in a process and between processes
  • It efficiently uses process resources

Scalability is another benefit of object caching. Since cached data is accessed across multiple sessions and Web applications, object caching can become a big part of a scalable Web application's design. Object caching helps avoid the cost of acquiring and releasing objects. It frees up valuable system hardware and software resources by distributing data across an enterprise rather than storing it in one centralized place such as the data tier. Locally stored data directly addresses latency, reduces operating costs, and eliminates bottlenecks. Caching facilitates management of Web applications by allowing them to scale at peak traffic times without the cost of additional servers. It can effectively smooth performance curves in a Web application for all-around better performance and resource allocation.

Object caching also has a few disadvantages. Memory consumption is one: the cache may occupy significant heap space in the application server, and the JVM's memory footprint can grow unacceptably large if unused data sits in the cache and is not released at regular intervals.

Another disadvantage is synchronization complexity. Depending on the kind of data, complexity increases because consistency must be maintained between the cached data and the original data in the data source. Otherwise, the cached data can fall out of sync with the actual data, which leads to inaccuracies.

Finally, changes made to cached data can be lost when the server crashes. A cache that is kept synchronized with the data source could prevent this problem.

Object-caching use

Typical uses of object caching include storing HTML pages, database query results, or any information that can be stored as a Java object. Basically, any data that does not frequently change and requires a significant amount of time to return from the data source is a good candidate for caching. That includes most types of lookup data, code and description lists, and common search results with paging functionality (search results can be extracted from the data source once and stored in the cache for use when the user clicks on the results screen's paging link).

The HttpSession object in the Tomcat servlet container offers a good example of object caching. Tomcat stores session objects in a Hashtable instance and uses a background thread to expire stale sessions.

Middleware technologies such as EJB and CORBA allow objects to be transferred between the client and the server. This type of access, also known as coarse-grained data access, minimizes the number of expensive remote method invocations. These data-transfer objects (also known as value objects) can be stored in the cache if they don't change frequently, which limits the number of times the servlet container must access the application server.

More examples of object-caching uses follow:

  • Enterprise JavaBeans: EJB entity beans represent database information in the middle tier, the application server. Once created, the entity beans are cached in the EJB container, which avoids expensive data retrieval (resource acquisition) from the database.
  • EJBHomeFactory cache: If client applications don't cache the stub somewhere, then remote method invocation becomes much more expensive, because every logical call to the server requires two remote calls: one to the naming service to fetch a stub and one to the actual server. This problem can be solved by creating an EJBHomeFactory class that caches references to EJB home interfaces and reuses them on subsequent calls (see the sketch following this list).
  • Web browsers: Most popular Web browsers such as Netscape and Internet Explorer cache frequently accessed Webpages. If a user accesses the same page, the browsers fetch the page's contents from the cache, thus avoiding the expensive retrieval of the contents from the Website. Timestamps determine how long to maintain the pages in the cache and when to evict them.
  • Data cache: Data stored in a RDBMS (relational database management system) is viewed as a resource that is sometimes hard to acquire. A correctly sized cache is a crucial component of a well-tuned database. Most databases incorporate a data cache of some sort. Oracle, for example, includes a shared global area that contains a cache of recently used database blocks and caches of compiled stored procedure code, parsed SQL statements, data dictionary information, and more.
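
As mentioned in the EJBHomeFactory item above, caching home-interface lookups is a common optimization. The following is a minimal sketch of such a factory, assuming an EJB 2.x environment; the class name and method signature are illustrative, not taken from a specific library.

    import java.util.HashMap;
    import java.util.Map;

    import javax.ejb.EJBHome;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.rmi.PortableRemoteObject;

    // Illustrative EJBHomeFactory: the JNDI lookup is performed once per home
    // interface and the narrowed stub is cached for reuse on later calls.
    public final class EJBHomeFactory {

        private static final EJBHomeFactory INSTANCE = new EJBHomeFactory();

        // JNDI name -> cached EJBHome stub
        private final Map homeCache = new HashMap();

        private EJBHomeFactory() {
        }

        public static EJBHomeFactory getInstance() {
            return INSTANCE;
        }

        // Returns a cached home interface, hitting the naming service only on the first call.
        public synchronized EJBHome lookupHome(String jndiName, Class homeClass)
                throws NamingException {
            EJBHome home = (EJBHome) homeCache.get(jndiName);
            if (home == null) {
                InitialContext ctx = new InitialContext();
                Object ref = ctx.lookup(jndiName);
                home = (EJBHome) PortableRemoteObject.narrow(ref, homeClass);
                homeCache.put(jndiName, home);
            }
            return home;
        }
    }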

How about data not fit for caching? Here's a list of data not recommended for caching:

  • Secure information that other users can access on a Website
  • Personal information, such as Social Security Number and credit card details
  • Business information that changes frequently and causes problems if not up-to-date and accurate
  • Session-specific data that may not be intended for access by other users

Caching algorithms

Resources stored in the cache require memory. If those resources are not used for a long time, holding on to them is inefficient, and because the cache's capacity is limited, some of the cache content must be purged when the cache is full before new content can be added. Cached objects can be invalidated in three different ways: by associating a "time-to-live" (TTL) with an object, by associating an "idle-time" with an object, or, when the caching system's (configurable) capacity has been reached, by having the caching system remove objects that have not been used recently.

A variety of cache expiration mechanisms can remove objects from a cache. These algorithms are based on criteria such as least frequently used (LFU), least recently used (LRU), most recently used (MRU), first in first out (FIFO), last access time, and object size. Each algorithm has advantages and disadvantages. LFU and LRU are simple, but they don't consider object size. A size-based algorithm removes large objects (those that require the most memory), but the byte-hit rate may then be low. It's important to consider all the Web application's requirements before deciding which algorithm to use for expiring cached objects.
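As a small illustration of the LRU policy, the JDK's own java.util.LinkedHashMap (available since JDK 1.4) can be turned into a bounded, access-ordered cache. The class name and size limit below are assumptions for this sketch, not part of any of the frameworks discussed in this article.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // A tiny LRU cache: an access-ordered LinkedHashMap that evicts the
    // least recently used entry once maxEntries is exceeded.
    public class LRUCache extends LinkedHashMap {

        private final int maxEntries;

        public LRUCache(int maxEntries) {
            // initial capacity 16, load factor 0.75, accessOrder = true (LRU ordering)
            super(16, 0.75f, true);
            this.maxEntries = maxEntries;
        }

        protected boolean removeEldestEntry(Map.Entry eldest) {
            // Returning true tells LinkedHashMap to drop the eldest (least recently used) entry.
            return size() > maxEntries;
        }
    }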

Object caching in a J2EE application

In a distributed system such as a J2EE application, two forms of caching can exist: client-side and server-side caching. Client-side caching is useful for saving the network bandwidth and the time required to repeatedly transmit server data to the client. Server-side caching, on the other hand, is useful when many client requests lead to repeated acquisition of the same resource on the server. Server-side caching can be applied in any tier: the database, application server, servlet container, or Web server.

Server subsystems such as the servlet engine can improve server performance by pooling such items as request, response, and buffer objects. The servlet objects themselves can be stored in the cache. The group invalidation feature can then be used when application reload is required. All servlets and related objects within an application can be cleaned up with a single method call. Part or all of a response can be cached if it is applicable to more than one response, which can significantly improve response time. Similarly, in the data tier, caching can provide a significant performance improvement.

IronEye Cache (from IronGrid) provides the option of storing frequently requested SQL statements in a cache to minimize database calls and deliver commonly requested information quickly. Oracle provides object caching in all tiers. Oracle Web Cache sits in front of the application servers (Web servers), caching their content and providing that content to Web browsers that request it. Object Caching Service for Java provides caching for expensive or frequently used Java objects within Java programs. The Object Caching Service for Java automatically loads and updates objects as specified by the Java application. And finally, Oracle iCache Data Source provides data caching within the database server.

Object caching in a J2EE cluster

Object caching in a cluster deserves special attention because multiple JVMs run in the cluster, and keeping all the cluster members' cached data in sync is crucial. Since each servlet container has its own cache manager instance in its JVM, data changes must be propagated to all the caches to prevent stale reads. This can be achieved by using a message-driven bean (MDB) to notify all of the cache managers when to refresh the cached data. Many caching frameworks provide built-in cluster support for caching data.
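A minimal sketch of that notification approach follows, assuming EJB 2.x message-driven beans and a shared JMS topic. The bean name, the use of a text message carrying the region name, and the clearCacheRegion() call on the cache manager are assumptions for illustration only.

    import javax.ejb.EJBException;
    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Each cluster node deploys this MDB against the same JMS topic; when an
    // invalidation message arrives, the local cache manager flushes the named region.
    public class CacheInvalidationMDB implements MessageDrivenBean, MessageListener {

        private MessageDrivenContext context;

        public void setMessageDrivenContext(MessageDrivenContext ctx) {
            this.context = ctx;
        }

        public void ejbCreate() {
        }

        public void ejbRemove() {
        }

        public void onMessage(Message message) {
            try {
                if (message instanceof TextMessage) {
                    // The message body carries the name of the cache region to refresh.
                    String regionName = ((TextMessage) message).getText();
                    // BaseCacheManager is the framework singleton described later in this
                    // article; clearCacheRegion() is an assumed method name.
                    BaseCacheManager.getInstance().clearCacheRegion(regionName);
                }
            } catch (Exception e) {
                throw new EJBException(e);
            }
        }
    }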

Caching frameworks

Several object-caching frameworks (both open source and commercial implementations) provide distributed caching in servlet containers and application servers. A list of some of the currently available frameworks follows:

Open Source:

  • Java Caching System (JCS)
  • OSCache
  • Java Object Cache (JOCache)
  • Java Caching Service, an open source implementation of the JCache API (SourceForge.net)
  • SwarmCache
  • JBossCache
  • IronEye Cache

Commercial:

  • SpiritCache (from SpiritSoft)
  • Coherence (Tangosol)
  • ObjectCache (ObjectStore)
  • Object Caching Service for Java (Oracle)

If you are interested in reading more about these caching implementations, see Resources for links to all these frameworks.

Factors to consider in an object-caching framework

Look for the following factors in a caching framework:

  • Element grouping
  • Quick nested categorical removal
  • Data expiration
  • Fully configurable runtime parameters
  • Remote store recovery
  • Scheduled cache expiry
  • Security (Authentication or authorization should be completed before objects return from the cache. The information transmitted between caches should be encrypted.)

In this article, I compare three open source object-caching frameworks in terms of their installation and configuration complexity, flexibility, and future extensibility. The following table provides a brief description of each of these frameworks.

Table 1. Overview of selected frameworks

Caching framework: Java Caching System (JCS)
Vendor: Jakarta (part of the Jakarta Turbine project)
URL: http://jakarta.apache.org/turbine/jcs
Overview: Java Caching System (JCS) is a highly flexible and configurable solution for increasing overall system performance by maintaining dynamic pools of frequently used objects. JCS goes beyond simply caching objects in memory; it provides several important features necessary for an enterprise-level caching system, including a design with no single point of failure that allows for full session failover (in clustered environments), including session data, across up to 256 servers. It also provides the flexibility to configure one or more data storage options, such as memory cache, disk cache, or caching the data on a remote machine.

Caching framework: OSCache
Vendor: OpenSymphony
URL: http://www.opensymphony.com/oscache/
Overview: OSCache caches sections of JSP pages and binary content such as PDFs or images. It provides both fast in-memory caching and persistent on-disk caching, depending on the caching requirements. It also supports object caching in a cluster environment.

Caching framework: JOCache
Vendor: ShiftOne
URL: http://jocache.sourceforge.net
Overview: JOCache was developed to provide basic object caching. It can integrate with OR (object relational) mapping frameworks such as Hibernate.

Table 2 shows a comparative summary of these three frameworks.

Table 2. Comparison of caching frameworks

Feature | JCS | OSCache | JOCache
JCache (JSR-107) compliant? | Yes | Yes | Yes
Installation and configuration complexity | Simple | Simple | Simple
Supports cache regions | Yes | Yes | Yes
Scheduled cache expiry | No | Yes | Yes
Configuration filename (format) | cache.ccf (plain text) | oscache.properties (plain text) | cache.properties
Jar file | jcs-1.0-dev.jar | oscache-2.0.jar | shiftone-cache.jar
Available cache algorithms | LRU, MRU | LRU, FIFO, Unlimited | FIFO, LRU, LFU
Clustering support | No | Yes | No

Proposed Web portal caching framework

Before I started designing the object-caching framework for this article, I made a list of objectives that needed to be accomplished:

  • Faster access to frequently used data in the Web portal application.
  • Grouping of similar object types in the cache. It should be possible to associate objects in the cache so they can be managed as a group and invalidated with a single operation.
  • Configurable cache management so I can modify cache parameters declaratively rather than programmatically.
  • Easy integration into any Web application with minimal or no changes in the Web application itself.
  • Seamless expiration mechanism. The hard part of expiration should be in the framework. Using the framework to expire objects should be as easy as possible.
  • The caching system shouldn't require the objects it stores to understand timestamps and know when they are out of date.
  • A flexible and extensible framework so I can switch to any third-party caching API in the future.
  • Flexibility to configure one or more data storage options, such as memory cache, disk cache, or caching the data on a remote machine.
  • Ability to generate statistics to monitor both caching effectiveness and application performance improvement as a result of data caching.
  • Ability to manage objects loaded from any source. The original source of the data being cached should have no restrictions.

The architecture diagram in Figure 1 shows the main components of the Web portal application using object caching.

Figure 1. Caching application architecture diagram

Installation and configuration

Table 3 lists the hardware and software specifications of the machine used to test the caching frameworks.

Table 3. Hardware and software specifications

Processor: HP Pavilion Pentium III, 800 MHz
Memory: 374 MB RAM
Hard disk: 40 GB
Operating system: Windows 2000 Server with Service Pack 4
JDK version: 1.4.0_02
Tomcat version: 5.0.18
Tools used: Ant 1.6.1, log4j

Main elements of an object-caching framework

A typical caching framework contains components such as a CacheObject, CacheObjectKey, Cache, CacheManager, and a CacheLoader. I designed this article's caching framework so that a single CacheManager class encapsulates all the implementation details of caching (the access, creation, and destruction of objects in the cache) and hides them from the client applications.

Java classes

Figure 2 illustrates the relationship of the Java classes in the caching framework.

Figure 2. Caching framework class diagram

Listed below are the Java classes that a Web application must know about to use the caching functionality. These classes are located in the common.caching package in the source code provided with this article, which can be downloaded from Resources.

ICacheManager

ICacheManager is the main interface (contract) that a client program uses to handle all operations related to caching (i.e., storing, accessing, and releasing the data in the cache). The client program can be a JSP (JavaServer Pages) page, a Struts action class, or a POJO (plain old Java object). This interface was created to hide all the caching implementation details from the client, so that if we need to switch to a different third-party caching API in the future, no client code has to change.
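The actual interface ships with the article's source download; as a rough sketch of the kind of contract described above (the method names here are assumptions, not the real declarations), it might look like this:

    // Hypothetical sketch of the caching contract; the interface in the article's
    // sample code may declare different method names and signatures.
    public interface ICacheManager {

        // Store an object in the named cache region under the given key.
        void putObject(String cacheRegionName, Object cacheKey, Object value);

        // Retrieve an object from the cache; a miss or an expired entry is
        // reloaded through the registered ICacheLoader.
        Object getObject(String cacheRegionName, Object cacheKey);

        // Release a single object from the cache.
        void removeObject(String cacheRegionName, Object cacheKey);

        // Release every object in a cache region with one call.
        void clearCacheRegion(String cacheRegionName);
    }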

BaseCacheManager

BaseCacheManager is the main class in the Web portal caching framework. It's the base implementation of ICacheManager. This class was created to centralize all the cache-related methods in one class. It's designed as a singleton to ensure one and only one instance of ICacheManager is created in the servlet container's JVM. In a clustered environment where multiple Web server/servlet container instances accept Web requests, a separate ICacheManager instance will be created in each JVM. If we switch to a different caching API later, this is the only class that must be modified to work with the new cache API. Also, if we switch to a JCache-compliant caching implementation, the cache manager should require minimal changes.
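The singleton access pattern described above might look roughly like the following sketch. In the real framework the class delegates to a third-party caching API; here a plain Map per region stands in for that API, and the method names match the hypothetical ICacheManager sketch shown earlier.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative singleton cache manager; not the article's actual implementation.
    public class BaseCacheManager implements ICacheManager {

        private static BaseCacheManager instance;

        // cache region name -> Map of cached objects for that region
        private final Map regions = new HashMap();

        private BaseCacheManager() {
            // In the real framework: load the caching properties file and
            // initialize the underlying caching API (JCS, OSCache, or JOCache).
        }

        // Lazily create the one and only instance in this JVM.
        public static synchronized BaseCacheManager getInstance() {
            if (instance == null) {
                instance = new BaseCacheManager();
            }
            return instance;
        }

        public synchronized void putObject(String region, Object key, Object value) {
            getRegion(region).put(key, value);
        }

        public synchronized Object getObject(String region, Object key) {
            return getRegion(region).get(key);
        }

        public synchronized void removeObject(String region, Object key) {
            getRegion(region).remove(key);
        }

        public synchronized void clearCacheRegion(String region) {
            getRegion(region).clear();
        }

        private Map getRegion(String region) {
            Map cache = (Map) regions.get(region);
            if (cache == null) {
                cache = new HashMap();
                regions.put(region, cache);
            }
            return cache;
        }
    }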

ICacheLoader

The ICacheLoader interface is how the Web client supplies the actual data-access logic. All client programs that need to use the caching mechanism must implement this interface. It has one method, loadCacheObject(), which takes two input parameters: a string specifying the cache region name and an object specifying the cache key. This way, the cache manager knows which client program to call (by executing its loadCacheObject() method) to reload the object into the cache when the cached data expires, i.e., after the specified time-to-live has elapsed.
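Based on that description, the interface has roughly the following shape (again a sketch; the actual declaration lives in the article's source download):

    // The client implements this callback; the cache manager invokes it on a cache
    // miss or after the cached object's time-to-live has elapsed.
    public interface ICacheLoader {

        // Create (or fetch from the data source) the object to be cached for the
        // given cache region and key.
        Object loadCacheObject(String cacheRegionName, Object cacheKey);
    }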

It is good practice to let the caching service load objects automatically as needed rather than have the application manage cached objects directly. When an application manages objects directly, it calls the cache's put() method to insert objects into the cache. To take advantage of automatic loading, we instead supply a cache-loader object and implement its load method, so that the caching service itself puts objects into the cache.

Note that the caching framework does not handle the creation of objects that need to be cached in a Web application, i.e., the data-access logic that retrieves the data from the data source is not coded in the caching classes. It relies on the client program to define the actual data-access logic. Technologies like Java Data Objects (JDO) are typically used to encapsulate the data-access logic in an enterprise Web application.

ICacheKey

The ICacheKey interface was created to hide the specific logic used to create a cache key. Sometimes the cache key may not be a simple string. It may be as complex as the combination of multiple objects, and getting these values from the data source involves not one, but several, lookup methods. In this case, ICacheKey can define all the complex logic involved in creating the cache key. This way, the cache-key creation logic is defined in a separate class. I wrote a sample class called TestCacheKey that implements this interface and overrides the getCacheKey() method to illustrate how to use this interface.
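A rough sketch of the interface implied by that description follows (the real TestCacheKey in the sample code supplies its own composite-key logic):

    // Implementations encapsulate however complex the key-building logic needs to be;
    // the cache manager only sees the finished key object.
    public interface ICacheKey {

        // Build and return the (possibly composite) key under which the object is cached.
        Object getCacheKey();
    }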

CacheRegion

A CacheRegion is defined as an organizational namespace for holding a collection of cache objects. Objects with similar characteristics (such as time-to-live and business use) should be cached in the same cache region so they can all be invalidated simultaneously if needed. To eliminate any synchronization issues that could cause poor performance, I used a separate instance of Cache for each cache region.

Configuration files

We configure all the caching parameters in a properties file. These parameters include the maximum number of objects that can be stored in memory, the time-to-live (after which the cached data is automatically released from memory), the idle time (elapsed time since the last access), and the memory cache name (the caching algorithm, such as LRU or MRU). Make sure the properties file is copied to a directory that's on the classpath.
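For illustration, such a file might contain entries along the following lines; the property names and values here are invented for this sketch and do not come from the article's actual configuration file.

    # Illustrative caching properties (names and values are assumptions)
    cache.memory.max.objects=1000
    # time-to-live in seconds before a cached object is released
    cache.ttl.seconds=3600
    # idle time in seconds since last access before a cached object is released
    cache.idletime.seconds=600
    # memory cache (eviction) algorithm, e.g., LRU or MRU
    cache.memory.cache.name=LRU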

Figure 3 shows the portal application's Web request flow in a sequence diagram.

Figure 3. Caching application sequence diagram

Testing setup

I created a build script using Ant to compile all the Java source code for the object-caching framework. The Ant build script, named build.xml, is located in the WEB-INF/classes directory. I also wrote a JUnit test client to exercise different caching scenarios using the Web portal caching framework. This test class, called CachingTestCase, is located in the WEB-INF/classes/common/caching/test directory. Extract the sample code to a new Web application directory. To compile the Java code and run the JUnit test script, run the following commands:

  • ant common.compile (to compile all Java classes included in the caching framework)
  • ant common.runjunit (to run the JUnit test script; the test script uses log4j to display all of the output messages)

I wrote nine different JUnit test cases to exercise criteria such as performance, multiple cache regions, and cache expiration. Table 4 lists these test cases.

Table 4. Caching test cases (JUnit)

  1. testBasicCaching: Store an object in the cache and verify that it's in the cache.
  2. testLoadFromCache: Store an object in the cache, sleep for a specified time less than the cached object's TTL, and retrieve the object from the cache to verify it's still there.
  3. testCachePerformance: Store an object in the cache and retrieve it 10 times to determine the performance gain of caching the object versus accessing it from the data source.
  4. testStoreRetrieveClear: Store three different objects in the cache using three different keys, retrieve them using the keys, then remove the objects from the cache and verify their removal.
  5. testMultipleCacheRegions: Store an object in two different cache regions, sleep for specified time periods, retrieve the object from the cache, and finally clear the cache regions.
  6. testCacheExpiry: Store an object in the cache, sleep for a specified time longer than the cached object's TTL, and try to retrieve the object to verify that it has already been removed from the cache.
  7. testCachingUsingCustomCacheKey: Store an object in the cache using a custom cache key and retrieve the object using that cache key.
  8. testLRUMemoryCache: Add items to the cache, retrieve them, and remove them. The item count exceeds the memory cache's size, so items should be dumped based on the LRU policy.
  9. testMultipleStoreRetrieveClear: Store, retrieve, and clear objects in the cache over many iterations to compare the performance of each caching framework.

Sample code

The sample code used in this article can be downloaded from Resources. Extract the zip file's contents to the Tomcat Webapps directory. You will need the following jar files in the classpath to run the test scripts.

  1. jcs-1.0-dev.jar
  2. oscache-2.0.jar
  3. shiftone-cache.jar
  4. log4j-1.2.8.jar
  5. junit-3.8.1.jar
  6. commons-logging-1.0.3.jar
  7. commons-lang-1.0.1.jar

Conclusion

Within the specified test parameters, I found that OpenSymphony's OSCache performed best of the three caching frameworks: it was roughly twice as fast as JCS and consistently faster than JOCache, which came in second in terms of caching performance. Table 5 shows the response times for Test Case 9 (testMultipleStoreRetrieveClear).

Table 5. Results for Test Case 9

(All response times are in milliseconds)

Number of iterations | JCS | OSCache | JOCache
2,000 | 3,104 | 2,133 | 2,224
4,000 | 8,132 | 4,556 | 6,029
6,000 | 16,774 | 9,714 | 10,344

All three frameworks were easy to install and simple to integrate into the Web portal application. The caching framework I created was flexible enough to allow switching the underlying caching implementation with minimal changes to the client code.

The exact gain in performance varies significantly depending on the cost of creating or acquiring the object and the ratio of reads to writes. The more costly the object is to create and the more reads per write, the greater benefit the cache can provide.

Caching should be applied carefully, and only after other means, such as optimizing the acquisition of the resource itself, have been exhausted. Caching introduces complexity and complicates maintenance of the overall solution, so consider the trade-off between performance and complexity before applying it.

Srini Penchikala presently works as information systems subject matter expert at Flagstar Bank. His IT career spans over 9 years with systems architecture, design, and development experience in client/server and Internet applications. He has been involved in designing and developing J2EE applications using Java and XML technologies since 1998. Penchikala holds a master's degree (Southern Illinois University, Edwardsville) and a bachelor's degree (Sri Venkateswara University, India) in engineering. His main interests are to research new J2EE technologies and frameworks related to designing Web portal applications. In his free time, Penchikala loves to travel and watch Detroit sports teams.
