Friday, January 22, 2010

Caching—SOA on Steroids


One of the most frequently heard critiques of SOA is that the overhead of SOAP/XML formats makes it intrinsically slow.  Yes, we all know that standards are often the result of consensus and aren’t always optimized, but SOA’s flexibility is needed to avoid recreating the monolithic “all-is-done-here” view of the older development culture.  There is no doubt that SOA architectures can be affected by message transmission delays, both from the larger message sizes that standardization brings and from the overheads associated with modular designs.
So, how to solve this conundrum?
 A common mistake is designing with the idea of avoiding these performance problems “from the start”. The outcome? Designs that are too monolithic, and that introduce inflexible interfaces with tightly coupled inter-process calls “in the interest of performance”.  Talk about throwing out the baby with the bathwater! A better approach is to design for flexibility, as the SOA gods intended, but to introduce the safety valve of caching throughout the system. Caching is the technique used to preserve recently used information as close as possible to the user so that it can be accessed more rapidly by a subsequent caller. Think of caching as a series of release valves needed to ensure the flow of services occurs as pressure-free as possible from beginning to end.
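To make the "release valve" idea concrete, here is a minimal sketch of a read-through cache sitting in front of a slow service call. The names (`ReadThroughCache`, `slow_service`) are illustrative, not from any particular product:

```python
import time

class ReadThroughCache:
    """Serve recently used results from memory; fall back to the
    slow backing service (e.g. a SOAP call) only on a miss."""

    def __init__(self, loader, ttl_seconds=60.0):
        self.loader = loader          # the slow service invocation
        self.ttl = ttl_seconds
        self._store = {}              # key -> (value, stored_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None:
            value, stored_at = entry
            if time.time() - stored_at < self.ttl:
                return value          # cache hit: no service round-trip
        value = self.loader(key)      # cache miss: call the service
        self._store[key] = (value, time.time())
        return value

calls = []
def slow_service(key):
    calls.append(key)                 # stand-in for an expensive remote call
    return key.upper()

cache = ReadThroughCache(slow_service, ttl_seconds=60.0)
cache.get("order-42")                 # miss: goes to the service
cache.get("order-42")                 # hit: served from memory
```

The second `get` never touches the service, which is the entire point: the pressure is released at the caching point instead of propagating downstream.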
The idea is to design a system that allows as many caching points as possible. This does not mean you will actually use all of them. Ironically, caching carries a performance penalty of its own, and you should therefore make certain to follow these tenets when using it:
·         Ensure that the caching logic operates asynchronously from the main execution path, so that managing the cache does not itself penalize request processing.
·         Ensure you use the appropriate caching strategy. There are several strategies, each suited to specific data dynamics.  Should you clear the cache based on least-used, oldest, or most recently added criteria?  Will you implement automatic cache-space reclamation (i.e. have a daemon periodically releasing cached elements in the background), or will you reclaim space only when certain thresholds are crossed?
·         The rules for caching should be flexible and controllable from a centralized management console. It is imperative to always have real-time visibility into the various cache dynamics and to be able to react appropriately to correct any anomalies. Use the recommended cache flag field in the message headers to gain finer-grained control over these dynamics.
·         Allow pre-loading of the cache, or sufficient cache warm-up, prior to opening the applications to the full force of requests.
·         Always remember that blindly caching items is not a magic bullet. The success of caching depends significantly on the items you cache. If the items change very frequently, you will have to update the cache just as frequently, and this overhead can erase any caching advantage.
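Several of these tenets can be sketched in a few lines. The class below (an illustrative name, not a real library) combines least-recently-used eviction on a size threshold with a `sweep()` that releases stale entries, and a daemon thread that runs the sweep off the main execution path:

```python
import threading
import time
from collections import OrderedDict

class ManagedCache:
    """Sketch of the tenets above: LRU eviction at a size threshold,
    plus background reclamation that stays off the request path."""

    def __init__(self, max_entries=1000, ttl_seconds=300.0):
        self.max_entries = max_entries
        self.ttl = ttl_seconds
        self._store = OrderedDict()   # key -> (value, stored_at), in LRU order
        self._lock = threading.Lock()

    def put(self, key, value):
        with self._lock:
            self._store[key] = (value, time.time())
            self._store.move_to_end(key)
            while len(self._store) > self.max_entries:
                self._store.popitem(last=False)   # evict least recently used

    def get(self, key):
        with self._lock:
            entry = self._store.get(key)
            if entry is None:
                return None
            self._store.move_to_end(key)          # mark as recently used
            return entry[0]

    def sweep(self):
        """Release entries older than the TTL; cheap enough to run
        periodically in the background rather than per request."""
        now = time.time()
        with self._lock:
            stale = [k for k, (_, t) in self._store.items() if now - t > self.ttl]
            for k in stale:
                del self._store[k]

    def start_sweeper(self, interval=30.0):
        """Run sweep() on a daemon thread, asynchronously from the main path."""
        def loop():
            while True:
                time.sleep(interval)
                self.sweep()
        threading.Thread(target=loop, daemon=True).start()

cache = ManagedCache(max_entries=2, ttl_seconds=300.0)
cache.put("a", 1)
cache.put("b", 2)
cache.put("c", 3)   # "a" is evicted as the least recently used entry
```

Note that a stale entry is still served until the sweeper reclaims it; whether that window is acceptable is exactly the kind of per-item decision discussed below.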
Even though there are vendor products that provide single-image views of distributed systems caching, I recommend using them only for well-defined server clusters and not broadly for the entire system. You will be better off designing custom-made caching strategies for each particular service call and data element in your solution. There are several caching expiration strategies, such as time-based expiration, size-based expiration (expiring the oldest x% of cache entries when a certain cache threshold is reached), and change-triggered cache updates using a publish/subscribe mode.
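The change-triggered variant deserves a quick illustration. In the sketch below (all names are hypothetical), the service of record publishes a change event and a subscribing cache invalidates the affected entry rather than waiting for a timer to expire:

```python
class ChangeBus:
    """Tiny in-process publish/subscribe bus: data owners publish
    change events; interested caches subscribe to them."""

    def __init__(self):
        self._subscribers = {}   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, key):
        for callback in self._subscribers.get(topic, []):
            callback(key)

class SubscribingCache:
    """A cache that drops an entry as soon as a change is announced,
    instead of serving it until a time- or size-based expiry."""

    def __init__(self, bus, topic):
        self._store = {}
        bus.subscribe(topic, self.invalidate)

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def invalidate(self, key):
        self._store.pop(key, None)

bus = ChangeBus()
cache = SubscribingCache(bus, "customer-updates")
cache.put("cust-7", {"name": "Acme"})
bus.publish("customer-updates", "cust-7")   # the data owner announces a change
# the stale "cust-7" entry is now gone from the cache
```

In a real SOA deployment the bus would of course be a messaging layer spanning processes, but the contract is the same: publish on change, invalidate on receipt.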
Selecting the right expiration and refresh strategy is essential to ensuring the freshness of your data, high cache hit ratios (low hit ratios can make overall system performance suffer because of the overhead incurred in searching the cache for an item that isn’t there), and avoidance of performance penalties due to cache management. Also, if you can preserve the cache in a non-volatile medium in order to permit a rapid cache restore during system start-up, then do so.
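Preserving the cache on a non-volatile medium can be as simple as periodically snapshotting it to disk and reloading the snapshot at start-up. A minimal sketch (the class and file names are illustrative):

```python
import json
import os
import tempfile

class PersistentCache:
    """Keep a disk snapshot so a restart warm-starts the cache
    instead of facing a cold one."""

    def __init__(self, snapshot_path):
        self.snapshot_path = snapshot_path
        self._store = {}
        if os.path.exists(snapshot_path):        # restore on start-up
            with open(snapshot_path) as f:
                self._store = json.load(f)

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)

    def snapshot(self):
        """Write the cache to disk; call periodically or at shutdown."""
        with open(self.snapshot_path, "w") as f:
            json.dump(self._store, f)

path = os.path.join(tempfile.gettempdir(), "soa_cache_snapshot.json")
if os.path.exists(path):
    os.remove(path)                              # start from a clean slate

first = PersistentCache(path)
first.put("rate-USD-EUR", 0.71)
first.snapshot()

second = PersistentCache(path)   # simulated restart: the cache is already warm
```

Anything restored this way should still pass through your normal expiration checks, since entries may have gone stale while the system was down.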
Clearly, choosing what data to cache is essential. Data that changes rapidly, or whose precision is critical, should not be cached (e.g. available product inventory should only be cached if the amount of product in inventory is larger than the amount of the largest possible order). You’ll need to assess, for each situation, how fresh the data must be. The optimum strategy must be determined carefully via trial and error. You can also apply analytical methods such as simulation (see later) to better estimate the impact of any potential change to either the characteristics of the data being cached or the preferred caching approach.
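The inventory rule of thumb reduces to a one-line guard; the function name here is hypothetical:

```python
def should_cache_inventory(on_hand_quantity, largest_possible_order):
    """Cache an inventory figure only when staleness cannot cause an
    oversell: the cached stock must exceed the largest possible order."""
    return on_hand_quantity > largest_possible_order

should_cache_inventory(5000, 100)   # plenty of stock: a cached figure is safe
should_cache_inventory(80, 100)     # near stock-out: always read the live value
```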
Finally, I can’t emphasize enough the need for accurate cache monitoring via real-time dashboards.  These dashboards are a core component of the infrastructure needed to properly manage a complex SOA system. More on Managing SOA next.