The consistent-hashing approach leads to high scalability. Based on consistent hashing, the memcached client implements a failover strategy to support high availability. But if a daemon crashes, the cache data is lost. This is a minor problem, because cache data is redundant by definition: it can always be recovered from the underlying data source.
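To illustrate the idea (this is a simplified sketch, not the memcached client's actual implementation; the class and method names are made up for this example), a consistent-hashing ring in Java might look like this:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Illustrative sketch of consistent hashing: servers are placed on a hash
// ring, and each key is assigned to the first server clockwise from its hash.
public class ConsistentHashRing {

    private final SortedMap<Long, String> ring = new TreeMap<>();
    private final int replicas; // virtual nodes per server for smoother distribution

    public ConsistentHashRing(List<String> servers, int replicas) {
        this.replicas = replicas;
        for (String server : servers) {
            addServer(server);
        }
    }

    public void addServer(String server) {
        for (int i = 0; i < replicas; i++) {
            ring.put(hash(server + "#" + i), server);
        }
    }

    public void removeServer(String server) {
        for (int i = 0; i < replicas; i++) {
            ring.remove(hash(server + "#" + i));
        }
    }

    // Returns the server responsible for the given cache key.
    public String serverFor(String key) {
        if (ring.isEmpty()) {
            throw new IllegalStateException("no servers available");
        }
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        Long point = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(point);
    }

    private long hash(String value) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(value.getBytes(StandardCharsets.UTF_8));
            // Fold the first 8 bytes of the MD5 digest into a long.
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (digest[i] & 0xFF);
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}

Removing a crashed server reassigns only the keys that were mapped to it; every other key keeps its server, which is why consistent hashing lends itself to a failover strategy.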
A simple approach to making the memcached architecture fail-safe is to store each cache entry on both a primary and a secondary cache server. If the primary cache server goes down, the secondary server probably still holds the entry. If not, the required (cached) data must be recovered from the underlying data source.
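A rough sketch of this idea in Java could look as follows; the CacheClient interface and FailSafeCache class are assumptions made for this example, not part of memcached or any particular client library:

import java.util.function.Supplier;

// Assumed minimal cache-client interface for this sketch.
interface CacheClient {
    Object get(String key);
    void set(String key, Object value);
}

public class FailSafeCache {

    private final CacheClient primary;
    private final CacheClient secondary;

    public FailSafeCache(CacheClient primary, CacheClient secondary) {
        this.primary = primary;
        this.secondary = secondary;
    }

    // Every entry is written to both servers.
    public void put(String key, Object value) {
        primary.set(key, value);
        secondary.set(key, value);
    }

    // Reads try the primary first, then the secondary. If both miss (or the
    // primary is down), the value is recovered from the underlying data
    // source and re-cached.
    public Object get(String key, Supplier<Object> dataSource) {
        Object value = tryGet(primary, key);
        if (value == null) {
            value = tryGet(secondary, key);
        }
        if (value == null) {
            value = dataSource.get();   // recover from database or service
            put(key, value);            // repopulate both caches
        }
        return value;
    }

    private Object tryGet(CacheClient client, String key) {
        try {
            return client.get(key);
        } catch (RuntimeException serverDown) {
            return null; // treat an unreachable cache server as a cache miss
        }
    }
}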
Supporting application session data in a fail-safe way is more problematic. Application session data represents the state of a user-specific application session. Examples include the ID of a selected folder or the articles in a user's shopping cart. The application session data must be maintained across requests. In classic ("Web 1.0") Web applications, such session data must be held on the server side. Storing it on the client by using cookies or hidden fields has two major weaknesses. First, it exposes internal session data, such as the price fields of a shopping cart, to client-side attacks, so you must address this security risk. Second, it works only for small amounts of data, limited by the maximum size of the HTTP cookie header and by the overhead of transferring the application session data to and from the client with every request.
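As a point of reference, here is a minimal sketch of server-side session state using the standard Servlet API; the AddToCartServlet and ShoppingCart classes are invented for this example:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Server-side application session data: only the session ID travels in a
// cookie; the shopping cart itself never leaves the server.
public class AddToCartServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        HttpSession session = request.getSession(true); // created on first use
        ShoppingCart cart = (ShoppingCart) session.getAttribute("cart");
        if (cart == null) {
            cart = new ShoppingCart();
            session.setAttribute("cart", cart);
        }
        cart.add(request.getParameter("articleId"));
        response.sendRedirect("cart.jsp");
    }
}

// Illustrative cart class; price calculation stays on the server side.
class ShoppingCart implements java.io.Serializable {
    private final java.util.List<String> articleIds = new java.util.ArrayList<>();
    void add(String articleId) { articleIds.add(articleId); }
}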
Similar to the memcached architecture, session servers can be used to store application session data on the server side. In contrast to cached data, however, application session data is not redundant by definition. Caches are free to remove cache entries for memory-management reasons at any time; caching algorithms such as least recently used (LRU) evict entries when the maximum cache size is reached. Application session data, by contrast, must not be removed to make room for new data when the maximum memory size is reached.
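The following sketch shows the kind of LRU eviction a cache may apply; it is a generic example based on java.util.LinkedHashMap, not the eviction code of memcached itself:

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache built on LinkedHashMap's access-order mode: when the
// maximum size is exceeded, the least recently used entry is silently
// dropped. That is acceptable for cache entries, which can be recovered from
// the data source, but not for application session data, which would be lost.
public class LruCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public LruCache(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}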
If the session server crashes, the application session data is lost. In contrast to cached data, application session data is not recoverable in most cases. For this reason it is important that failover solutions support application session data in a fail-safe way.
The disadvantage of the cache and session server approach is that each request leads to an additional network call from the business server to the cache or session server. In most cases call latency is not a problem, because the cache or session server and the business servers are placed in the same fast network segment. But latency can become problematic as the size of the data entries grows. To avoid moving large sets of data between the business server and the cache/session servers over and over, all requests from a given client must be forwarded to the same server, so that all of a user session's requests are handled by the same server instance.
In the case of caching, this makes it possible to use a local cache instead of the distributed memcached server infrastructure, so no dedicated cache servers are required. The approach is known as client affinity: the client is always directed to "its" particular server.
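A sticky routing rule of this kind is usually configured in the load balancer, but the idea can be sketched in a few lines of Java; the AffinityRouter class below is purely illustrative:

import java.util.List;

// Illustrative client-affinity dispatcher: all requests that carry the same
// session ID are routed to the same server instance, so that server's local
// cache and local session data can be reused across requests.
public class AffinityRouter {

    private final List<String> servers; // e.g. ["app1:8080", "app2:8080"]

    public AffinityRouter(List<String> servers) {
        this.servers = servers;
    }

    public String serverFor(String sessionId) {
        // A stable hash of the session ID always yields the same index,
        // as long as the server list does not change.
        int index = Math.floorMod(sessionId.hashCode(), servers.size());
        return servers.get(index);
    }
}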