Friday, December 25, 2009

The Data Visibility Exceptions

The Data Sentinel is not unlike the grumpy bureaucrat processing your driver’s license application forms. After ensuring that you comply with what’s sure to be a ridiculously complicated list of required documents, it isolates you from directly accessing the files in the back.
While you, the applicant, the supplicant, cannot go around the counter and check the content of your files directly (not legally, anyway), the DMV supervisor in the back office is able to access any of the office files directly. After all, the supervisor is authorized to bypass the system processes intended to limit direct access to the data. Direct supervisory access to data is one of the exceptions to the data visibility constraints mentioned earlier.
Next is the case of ETL (Extract, Transform, Load) processing of large data sets, as well as their reporting. These cases require batch-level access to data in order to process or convert millions of records, and they can wreck performance if carelessly implemented. Reporting jobs should ideally run against offline replicated databases, not the online production databases. Better yet is to plan for a proper Data Warehousing strategy that allows you to run business intelligence processes independently of the main Operational Data Store (ODS). Nevertheless, on occasion you will need to run summary reports or data-intensive real-time processes against the production database. When the report tool is allowed to access the database directly, bypassing the service layer provided by the Data Sentinel, you will need to ensure this access is well-behaved and that it runs as a low-priority process under restricted user privileges. The same control is required for the ETL processes. Operationally, you should always schedule batch-intensive processes for off-peak times such as nightly runs.
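To make that concrete, here is a minimal sketch of what such a well-behaved reporting job could look like: it runs on a schedule aimed at an off-peak window, connects to a replica through a dedicated, restricted account, and marks the connection read-only. The connection URL, the "report_reader" account, and the query are assumptions made for illustration, not a recommendation of any particular product.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class NightlySalesReport {

        // Hypothetical connection details; the account should carry SELECT grants only,
        // and the host points at a replica rather than the production primary.
        private static final String URL  = "jdbc:postgresql://replica-host/sales";
        private static final String USER = "report_reader";
        private static final String PASS = "secret";

        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Aim the batch run at an off-peak window (here: first run 6 hours from startup,
            // repeating every 24 hours) instead of firing it during business hours.
            scheduler.scheduleAtFixedRate(NightlySalesReport::runReport, 6, 24, TimeUnit.HOURS);
        }

        private static void runReport() {
            try (Connection conn = DriverManager.getConnection(URL, USER, PASS)) {
                conn.setReadOnly(true); // hint to the driver/database that no writes will occur
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                         "SELECT region, SUM(amount) FROM sales GROUP BY region")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " -> " + rs.getBigDecimal(2));
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }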
A third exception is implied by the use of off-the-shelf transaction monitors, which require direct access to the databases in order to implement the ACID logic discussed earlier.
A fourth exception is demanded by the need to execute large data-matching processes. If there is an interactive need to run a process against a large database with matching keys in a separate database (“for all customers with sales greater than an $X amount, apply a promotion flag equal to the percentage corresponding to the customer’s geographic location in the promotion database”), then it makes no sense to try to implement each step via discrete services. Such an approach would be extremely contrived and inefficient. Instead, a Table-Joiner super-service will be required. More on that next.
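To see why the discrete-service approach breaks down, here is a hypothetical contrast in Java: the naive loop that fires one service call per matching customer versus a single set-based statement issued by a Table-Joiner style super-service. The table names, the SQL dialect, and the service interfaces are all invented for illustration only.

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class PromotionFlagJob {

        // Anti-pattern: one discrete service call per matching customer.
        void flagCustomersOneByOne(CustomerService customers, PromotionService promotions,
                                   double salesThreshold) {
            for (String customerId : customers.findWithSalesAbove(salesThreshold)) {
                double pct = promotions.percentageForLocation(customers.locationOf(customerId));
                customers.setPromotionFlag(customerId, pct); // millions of round trips
            }
        }

        // Table-Joiner super-service: one set-based statement does the matching in the database.
        void flagCustomersInBulk(Connection conn, double salesThreshold) throws Exception {
            String sql =
                "UPDATE customer c " +
                "   SET promotion_pct = p.percentage " +
                "  FROM promotion p " +
                " WHERE p.location = c.location " +
                "   AND c.total_sales > ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setDouble(1, salesThreshold);
                ps.executeUpdate();
            }
        }
    }

    // Hypothetical service interfaces used by the naive version.
    interface CustomerService {
        Iterable<String> findWithSalesAbove(double threshold);
        String locationOf(String customerId);
        void setPromotionFlag(String customerId, double percentage);
    }

    interface PromotionService {
        double percentageForLocation(String location);
    }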

Friday, December 18, 2009

Transactional Services

Related to the issue of Session-Keeping is how to ensure that complex business transactions take place in a way that meets the following so-called ACID properties:
·         Be Atomic. The transaction is indivisible and it either happens or does not.
·         Be Consistent. When the transaction is completed, all data changes should be accounted for. For example, if we are subtracting money from one bank account and transferring it to another account, the transaction should guarantee that the money added to the new account has been subtracted from the original account.
·         Act in Isolation.  I like to call this the sausage-making rule. No one should be able to see what’s going on during the execution of a transaction. No other transaction should be able to find the backend data in a half-done state. Isolation implies serialization of transactions.
·         Be Durable.  When the transaction is done, the changes are there and they should not disappear. Having a transaction against a cache that fails to update the database is an example of non-durability.
Since we are dealing with a distributed processing environment based on services, the main method used to ensure that ACID is met is a process known as Two-Phase Commit. Essentially, a Two-Phase Commit establishes a transaction bracket prior to executing changes, performs the changes, and, after ascertaining that all needed changes have occurred, issues a commit to finalize the changes by closing the transaction bracket. If, during the process, the system is unable to perform one or more of the necessary changes, a rollback will occur to undo any prior partial change. This is needed to ensure that, if unsuccessful, the transaction will, at the very least, return the system to its original state. This process is so common-sense that, in fact, all this business of transaction processing has been standardized. The Open Group[1] consortium defines transactional standards, in particular the so-called X/Open protocol and the XA compliance standards.
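As an illustration of the transaction bracket, here is a minimal sketch using the standard Java Transaction API (JTA) inside a container. The order and inventory stores are hypothetical stand-ins for two XA-enlisted resources; the point is only the begin/commit/rollback bracket wrapped around the changes.

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class TwoPhaseCommitSketch {

        // Both stores are assumed to be enlisted as XA resources with the container's
        // transaction manager (e.g. two different databases, or a database and a queue).
        private final OrderStore orders;
        private final InventoryStore inventory;

        public TwoPhaseCommitSketch(OrderStore orders, InventoryStore inventory) {
            this.orders = orders;
            this.inventory = inventory;
        }

        public void placeOrder(String sku, int quantity) throws Exception {
            // In a Java EE container the transaction manager is reachable through JNDI.
            UserTransaction tx = (UserTransaction) new InitialContext()
                    .lookup("java:comp/UserTransaction");

            tx.begin();                             // open the transaction bracket
            try {
                orders.insert(sku, quantity);       // change against XA resource #1
                inventory.decrement(sku, quantity); // change against XA resource #2
                tx.commit();                        // prepare + commit across both resources
            } catch (Exception e) {
                tx.rollback();                      // undo partial changes, back to the original state
                throw e;
            }
        }
    }

    // Hypothetical stores backed by separate XA-capable resources.
    interface OrderStore { void insert(String sku, int quantity); }
    interface InventoryStore { void decrement(String sku, int quantity); }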
However, transactional flows under SOA tend to be non-trivial. This is because a transaction flow requires the keeping of session state throughout the life of the transaction and, as earlier discussed, state-keeping is to SOA what Kryptonite is to Superman.  Say you want to transfer money from one checking account to another. You call the service Subtract X from Account X; then you call another service, Add X to Account Y. This simple example puts the burden of transactional integrity on the client of the services. The client should ensure that the Add to Account service has succeeded before subtracting the money from the original account. An approach like this breeds as much complexity as a cat tangling a ball of yarn, and it should be avoided at all costs.  Far simpler is to create a service, Transfer X from Account X to Account Y, and then let the service implementation worry about ensuring the integrity of the operation. The question then is what type of implementation is most appropriate.
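Seen as contracts, the difference is simply one of granularity. A sketch, with interface and method names of my own invention:

    // Fine-grained contract: the client must coordinate the two calls and handle partial failure.
    interface AccountService {
        void subtract(String accountId, long amountCents);
        void add(String accountId, long amountCents);
    }

    // Coarse-grained contract: one service owns the transactional integrity of the whole operation.
    interface TransferService {
        // Implemented on top of the RDBMS or a transaction monitor; either the
        // whole transfer happens or none of it does.
        void transfer(String fromAccountId, String toAccountId, long amountCents);
    }

The coarse-grained version keeps the transactional burden on the server side, where the database or transaction monitor can do the heavy lifting.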
While SOA-based transactional standards are in place[2], actual vendor implementations supporting these standards don’t yet exist in the way mature, XA-compliant database implementations do. In general, you’d be better off leveraging the backend transaction facilities provided by RDBMS vendors or by off-the-shelf transaction monitors such as CICS, MTS, or Tuxedo. All in all, it’s probably best to encapsulate these off-the-shelf transaction services behind a very coarse meta-service whenever possible, rather than attempting to re-implement the ACID support via Two-Phase Commit at the services layer.
It should be noted that what I am essentially recommending is an exception to encapsulating databases behind a Data Sentinel when it comes to implementing transactional services. The reasoning is that integrating with off-the-shelf transactional services will likely require direct database access in order to leverage the XA capabilities of the database vendor.
As more actual off-the-shelf transactional service solutions for SOA appear in the future, we can then remove the exception.
More on the Data Visibility Exceptions will follow. . .


[1] http://www.opengroup.org/
[2] http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=ws-tx

Thursday, December 10, 2009

State Keeping/State Avoidance


Managing SOA complexity brings up the question of session state. By ‘state’ I mean all the information that must be maintained and stored across the series of interactions needed to complete a full business exchange. Maintaining the state of a service interaction implies remembering at what stage the conversing partners are and what working data is in effect. It will often be at your discretion whether to design services that depend more or less on the use of state information. At other times the problem at hand will force a specific avenue. In either case, you should remember this simple formula: State-Keeping = Complexity.
Maintaining state might be inescapable in automated orchestration logic, but it comes with a cost. State-keeping constrains the options for maintaining high availability and may indirectly increase SOA’s fragility by making it more difficult to add redundant components to the environment. With redundant components you must ensure that messages flowing through the system maintain their state, regardless of the server resources used. Relying on session state while also allowing flexible service flows is hard to do. It’s done, yes, but the price you will have to pay is increased complexity and the performance penalties related to propagating the state of a particular interaction across several nodes. Therefore, a key SOA tenet is that you should use sessionless flows whenever possible. In other words, every request should ideally be atomic and serviceable regardless of the occurrence of previous requests.
Do you want to know the name of an employee with a given social security number? No problem. As part of the request, pass the social security number and receive the name. If you next want the employee’s address, you can pass either the social security number or the name as part of the request. While atomic, sessionless requests such as these do require the client to maintain the state of the interaction and hold the information elements related to the employee, this approach greatly simplifies the design of systems using server clusters.
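As a contract, such sessionless requests might look like the sketch below. The operations are hypothetical; the point is simply that every call carries all the context the server needs, so any server in the cluster can answer it.

    // Each request carries everything the server needs; no conversation state is kept server-side.
    interface EmployeeDirectoryService {
        String nameBySsn(String ssn);          // pass the SSN, get the name
        String addressBySsn(String ssn);       // a later request simply passes the key again
        String addressByName(String fullName); // or any other key the client already holds
    }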
Still, while the preferred tenet is to avoid session keys, on occasion it becomes impossible for the client to keep the state, forcing the server to assume this responsibility. In this case, the approach is to use a uniquely generated “session-id” whereby the server “remembers” the employee information (the state).  You will have to ensure the session key and its associated state data are accessible to all servers in a loosely-coupled cluster, making your system design more complicated.
For an example of keeping a session-based state, consider an air booking process where the client is reserving a pair of seats. The server will temporarily decrease the inventory for the flight. For the duration of the transaction the server will give a unique “reservation id” to the client so that any subsequent requests from the client can be associated with the holding of these seats.   Clearly, such a process will need to include timeout logic to eventually release the two seats in the event the final booking does not take place within a predetermined amount of time.
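A minimal sketch of that server-side hold follows. The in-memory map and the inventory interface are assumptions made for illustration; in a real cluster the holds would live in shared, persistent storage so that every server can see them, which is exactly the added complexity discussed above.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    public class SeatHoldService {

        private static final Duration HOLD_TIMEOUT = Duration.ofMinutes(15);

        private record Hold(String flight, int seats, Instant expiresAt) {}

        private final Map<String, Hold> holds = new ConcurrentHashMap<>();
        private final FlightInventory inventory; // hypothetical inventory service

        public SeatHoldService(FlightInventory inventory) {
            this.inventory = inventory;
        }

        // Temporarily decrease inventory and hand the client a reservation id (the session key).
        public String holdSeats(String flight, int seats) {
            inventory.decrease(flight, seats);
            String reservationId = UUID.randomUUID().toString();
            holds.put(reservationId, new Hold(flight, seats, Instant.now().plus(HOLD_TIMEOUT)));
            return reservationId;
        }

        // Called periodically: release seats whose booking never completed in time.
        public void releaseExpiredHolds() {
            Instant now = Instant.now();
            holds.forEach((id, hold) -> {
                if (hold.expiresAt().isBefore(now) && holds.remove(id, hold)) {
                    inventory.increase(hold.flight(), hold.seats());
                }
            });
        }
    }

    interface FlightInventory {
        void decrease(String flight, int seats);
        void increase(String flight, int seats);
    }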
This discussion leads to another tenet: maintaining state, either in the client or in the server, along the lines mentioned is ultimately acceptable. Keeping the state inside the intermediate nodes? Not so much.  Why? An intermediate component should not be in control of timing out a resource that is being held in the server. If it were, it would be disrupting the server’s ability to maintain integrity in its environment. Also, an intermediate component will not have full awareness of the business semantics of the service request/response.  Relying on an intermediate component to preserve state is like expecting your mail carrier to remind you that your cable bill is due for payment on the 20th of each month. He might do it, yes, but the moment you forget to tip him during the holidays, he just might “forget”!
Ironically, many of today’s vendors offer solutions that push business logic into their intermediate infrastructure products, encouraging you to maintain state in these middleware components. They do so because enabling middleware is an area that does not require them to be aware of your applications, and thus is the easiest area for them to offer you a “value-add service” in a productized, commoditized fashion. You should resist these siren songs and refrain from using their tempting extra services. If not, you may find yourself stuck with an inflexible design and with a dependency on a specific vendor’s architecture to boot.
My advice is to avoid these vendor-enabled approaches. There is much that can get complicated with the maintenance of state, especially when the business process requires transactional integrity, referential integrity, and security (and most business processes do). The moment you give up this tenet and maintain session state inside the SOA middleware, as opposed to at the extreme ends represented by the Client and the Server, you will be ensuring years of added complexity in the evolution of your SOA system.

Friday, December 4, 2009

Taming the SOA Complexities


Remember when I used to say, “Architect for Flexibility; Engineer for Performance”? Well, this is where we begin to worry about engineering for performance. This section, together with the following SOA Foundation section, represents the Level III architecture phase. Here we endeavor to solve the practical challenges associated with SOA architectures via the application of pragmatic development and engineering principles.


On the face of it, I wish SOA were as smooth as ice cream. However, I regret to inform you that it is anything but.  In truth, SOA is not a panacea, and its use requires a fair dose of adult supervision. SOA is about flexibility, but flexibility also opens up the different ways one can screw up (remember when you were in college and no longer had to follow a curfew?).  Best practices should be followed when designing a system around SOA, but there are also some principles that may be counter-intuitive to the “normal” way of doing architecture. So, let me wear the proverbial devil’s advocate hat and give you a list from “The Proverbial Almanac of SOA Grievances & Other Such Things Thusly Worrisome & Utterly Confounding”:
·         SOA is inherently complex. Flexibility has its price. By their nature, distributed environments have more “moving” pieces, thereby increasing their overall complexity.
·         SOA can be very fragile. SOA has more moving parts, leading to increased component interdependencies.  A loosely coupled system has potentially more points of failure.
·         SOA is intrinsically inefficient. In SOA, computer optimization is not the goal. The goal is to more closely mirror actual business processes. The pursuit of this worthy objective comes at the price of SOA having to “squander” computational resources.
The way to deal with SOA’s intrinsic fragility and inefficiency is by increasing its robustness.  Unfortunately, increasing robustness entails the inclusion of fault-tolerant designs that are inherently more complex.  Why? Robustness implies deployment of redundant elements. All this runs counter to platonic design principles, and it runs counter to the way the Level I architecture is usually defined. There’s a natural tension because high-level architectures tend to be highly optimized, generic, and abstract, referencing only the minimum detail necessary to make the system operate. That is, high-level architectures are usually highly idealized, and there is nothing wrong with that. Striving for an imperfect high-level architecture is something only Homer Simpson would do. But perfection is not a reasonable design goal when it comes to practical SOA implementations.  In fact, perfection is not a reasonable design goal when it comes to anything.
Consider how Mother Nature operates.  Evolution’s undirected changes often result in non-optimal designs. Nature solves the problem by “favoring” a certain amount of redundancy to better respond to sudden changes and to better ensure the survival of the organism. “Perfect” designs are not very robust. A single layered roof, for example, will fail catastrophically if a single tile fails. A roof constructed with overlapping tiles can better withstand the failure of a single tile. 
A second reason SOA is more complex is explained by the “complexity rule” I covered earlier: the more simplicity you want to expose, the more complex the underlying system has to be. Primitive technology solutions tend to be difficult to use, even if they are easier to implement.  The inherent complexity of the problem they try to solve is more exposed to the user. If you don’t believe me, consider the following instructions from an old Ford Model T user manual:
“How are Spark and Throttle Levers Used? Answer: under the steering wheel are two small levers. The right-hand (throttle) lever controls the amount of mixture (gasoline and air) which goes into the engine. When the engine is in operation, the farther this lever is moved downward toward the driver (referred to as “opening the throttle”) the faster the engine runs and the greater the power furnished. The left-hand lever controls the spark, which explodes the gas in the cylinders of the engine.”
Well, you get the idea. SOA is all about simplifying system user interactions and about mirroring business processes.  These goals force greater complexity upon SOA. There is no way around this law.
There are myriad considerations to take into account when designing a services-oriented system.  Based on my experience, I have come up with a list covering some of the specific key techniques I have found effective in taming the inherent SOA complexities.  The techniques relate to the following areas, which I will be covering next:
State-Keeping/State Avoidance. Figuring out under what circumstances state should be kept has direct relevance in determining the ultimate flexibility of the system.
Mapping & Transformation. Even if the ideal is to deploy as homogeneous a system as possible, the reality is that we will eventually need to handle process and data transformations in order to couple diverse systems. This brings up the question as to where it is best to perform such transformations.
Direct Access Data Exceptions. As you may recall from my earlier discussion on the Data Sentinel, ideally all data would be brokered by an insulating services layer. In practice, there are cases where data must be accessed directly. The question is how to handle these exceptions.
Handling Bulk Data. SOA is ideal for exchanging discrete data elements. The question is how to handle situations requiring the access, processing, and delivery of large amounts of data.
Handling Transactional Services.  Formalized transaction management imposes a number of requirements to ensure transactions have integrity and coherence. Matching a transaction-based environment to SOA is not obvious.
Caching. Yes, there’s a potential for SOA to exhibit slower performance than grandma driving her large 8-cylinder car on a Sunday afternoon. The way to tame this particular demon is to apply caching extensively and judiciously (see the small sketch right after this list).
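Since caching will come up again and again, here is the simplest flavor of what I have in mind: a small time-to-live cache wrapped around an expensive lookup. The cache below is a generic sketch of my own, and the service it would wrap is hypothetical.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Caches the result of a slow lookup for a fixed time-to-live, trading a
    // little staleness for a large reduction in backend calls.
    public class TtlCache<K, V> {

        private record Entry<V>(V value, Instant expiresAt) {}

        private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
        private final Function<K, V> loader;
        private final Duration ttl;

        public TtlCache(Function<K, V> loader, Duration ttl) {
            this.loader = loader;
            this.ttl = ttl;
        }

        public V get(K key) {
            Entry<V> entry = entries.get(key);
            if (entry == null || entry.expiresAt().isBefore(Instant.now())) {
                entry = new Entry<>(loader.apply(key), Instant.now().plus(ttl));
                entries.put(key, entry);
            }
            return entry.value();
        }
    }

    // Hypothetical usage:
    // TtlCache<String, Quote> quotes = new TtlCache<>(quoteService::lookup, Duration.ofSeconds(30));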
All the above techniques relate to the actual operational effectiveness of SOA. Later on I will also cover the various considerations related to managing SOA operations.
Let’s begin . . .

Friday, November 27, 2009

Interlude Two: Y2K and the Fuzzy Nature of Success

Before the start of the millennium the world was abuzz with concern, as it held its collective breath fearing that the much-touted Y2K bug would usher in the end of civilization.  Airplanes could fall from the sky, nuclear plants could melt down, and financial assets could disappear.
Mind you, the so-called “Millennium Bug” was not actually a bug. Rather, it was a side effect of legacy applications written during a time when the cost to store one byte of data was about ten million times higher than it is today.  Storing only the last two digits of the year did in fact save significant money during those early days of super-expensive disk storage. The real issue was that legacy applications overstayed their welcome, as they are prone to do, and the need to prevent a devastating impact from Y2K became a critical imperative[1].
With a total worldwide estimated expense of $200B, the industry successfully confronted both the hype and the challenge of this problem. In the end, the only explosions seen were from the fireworks of the beautiful millennial ceremonies held around the world. January 1, 2000 came and went with barely a hiccup.
Sadly, there soon emerged the narrative that IT people had duped everyone with their “Y2K fear-mongering” and that the whole effort had been a waste of money. During a CNN interview, a congressman opposing a new budget request stated, “Let’s not have another Y2K project where we spent so much money on it and then . . . nothing happened anyway!”
All this got me thinking about the fuzzy nature of success. For the most part, failure is easy to recognize, even if politicians and some Wall Street insiders are masters at concealing it (mortgage derivatives, anyone?). Success, on the other hand, is often muffled or belittled.  Even clearly successful endeavors such as the moon landing had their share of detractors: “The project was over-budget”, “Astronauts died in the process”, “The whole mission was a fool’s-errand”, “That money could have been used to solve world-hunger”, “The whole thing was a hoax”, et cetera, et cetera.
Yes, we frequently hear motivational arguments about how failure can be a good thing, how we can learn from it, how we should all be allowed to fail, and so on, and so forth.  Truth be told, no one possessing any degree of rationality seeks to fail, and if you fail one too many times, chances are that you will suffer the consequences.  Setting all philosophizing aside, I think we can all agree that success is preferable to failure.
But what exactly constitutes success?
A baseball batter is deemed wildly successful when he is able to hit the ball 40% of the time. What is the right percentage for a transformation project? Certainly not 40%.  Neither is 100%. A bigger problem is the expectation of perfection. One of my assigned goals in a previous job was to make sure that all technology worked well all of the time—this on a shoestring budget and limited resources to boot.  Yet, in the real world, a project that succeeds too well in all of its dimensions is either a chimera or something that was not ambitious enough to begin with.
Alas, I have witnessed projects that delivered and even exceeded their key goals, but because they failed to meet one hundred percent of the original expectations, they ended up being perceived as failures. Projects that truly set out to accomplish something must be measured against their key deliverables, even if they might miss a few non-essential objectives.  This is especially true with large and complex transformation initiatives.
As far as that congressman was concerned, the Y2K intervention was a failure because he looked only at the cost involved and did not understand that “nothing bad should happen” was actually the criterion for success. The hundreds of billions of dollars spent to avert the Y2K bug were money well spent, precisely because, in the end, “nothing happened”. The Y2K catastrophe was avoided; plus, there’s anecdotal evidence that the Y2K remediation efforts also spearheaded beneficial transformation makeovers.
Many books have been written about failure but not many about success.  In my experience, success (actual or imagined) can take one of the following forms:
·         Bombastic.  When Charles Lindbergh landed in France after his solo trans-Atlantic flight, the feat was celebrated by a million people in Paris and between four and five million people when he paraded in New York City.
·         Celebrated. Naturally this is the kind of success that most of us hope for after a project is completed.
·         Contented.  Not all successes have to be celebrated. In fact, it is rather annoying when someone constantly highlights his or her accomplishments.  Contented success that takes place day by day has the value of making you feel inwardly warm and cozy knowing you have done your job right.
·         Unappreciated. As with Y2K, I have also seen projects deprived of recognition simply because they provided success without drama. It is ironic that projects that first fail and then eventually get fixed (usually at added expense and time) are the ones that tend to get credited and celebrated the most (e.g. the Hubble Space Telescope). However, projects that deliver on their promise right off the bat, especially when the promise is one of risk aversion, tend to receive little or no recognition.
·         Fake. Not real success, but I added it to this list for completeness’ sake. We all remember that landing on an aircraft carrier with a “Mission Accomplished” banner, and who actually thinks the Kardashians are talented?
Yes, no one wants to be unappreciated, but hopefully you will not make claims of fake success. It is fine to let everyone know of your actual accomplishments, but do so in a manner that does not cross the bragging line. To achieve success worthy of celebration, it is best to be objective about the essential metrics right from the start.  When creating a new project plan, make sure to include a Success Criteria section where you list the criteria that must be met in order to qualify the project for accolades. You may need to educate everyone on the paradigm that while the success criteria do include the deliverables you and your sponsors deem mandatory, they do not necessarily reflect the totality of desired deliverables.  After all, Kennedy’s goal for the Moon Shot was a very simple “land a man on the Moon and return him safely to Earth by the end of the decade”.  While the original five-billion-dollar budget was easily exceeded, and failures occurred along the way, the moon landing mission was indeed a Bombastic success.
If the objective of a project is to deliver something tangible, you can at the very least make a compelling case for its success by showing the working deliverable. However, success will be more difficult to define for those projects developed to avoid risks. In these cases your criteria will be phrased around the concept of “avoidance”, and since “avoidance” is fairly open-ended (e.g. “Avoid hackers breaking into our Credit Card Data Base”), you will need to refine the parameters. For instance: How long will you keep the data safe?  At what cost? What specific kinds of intrusion will be prevented? The more precise and objective you are with metrics, the better.
True success should always be assessed fairly and realistically and then celebrated. It’s worth remembering that lack of recognition is one of the main reasons seasoned professionals search for greener pastures. Every accomplishment should be humbly recognized and used as a foundation for the next step up the ladder.  And for every successful step we should give contented thanks.
With that, I wish you all a very successful 2015!



[1] If moving from the year 1999 to the year 2000 was tough, can you imagine the pain the Chinese had to undergo to move from the year of the Rabbit to the year of the Dragon? (No more millennium jokes, I promise)

Friday, November 20, 2009

The Data Sentinel


Data is what we put into the system and information is what we expect to get out of it (actually, there’s an epistemological argument that what we really crave is knowledge. For now, however, I’ll use the term ‘information’ to refer to the system output). Data is the dough; Information is the cake. When we seek information, we want it to taste good, to be accurate, relevant, current, and understandable. Data is another matter. Data must be acquired and stored in whatever way is best from a utilitarian perspective. Data can be anything. This explains why two digits were used to store the year in pre-millennium systems, leading to the big Y2K brouhaha (more on this later).  Also, data is not always flat and homogeneous. It can have a hierarchical structure and come from multiple sources. In fact, data is whatever we choose to call the source of our information.
Google reputedly has hundreds of thousands of servers holding Petabytes of data (1 Petabyte = 1,024 Terabytes), which you and I can access in a matter of milliseconds by typing free-text searches. For many, a response from Google represents information, but to others this output is data to be used in the cooking of new information. As a matter of fact, one of the most exciting areas of research today is the emergence of Collective Intelligence via the mining of free text information on the web. Or consider the very promising WolframAlpha knowledge engine effort (wolframalpha.com), which very ambitiously taps a variety of databases to provide consolidated knowledge to users. There are still other mechanisms to provide information that rely on the human element as a source of data. Sites such as Mahalo.com or Chacha.com actually use carbon-based intelligent life forms to respond to questions.
Data can be stored in people’s neurons, spreadsheets, 3 x 5 index cards, papyrus scrolls, punched cards, magnetic media, optical disks, or futuristic quantum storage. The point is that the user doesn’t care how the data is stored or how it is structured. In the end, Schemas, SQL, Rows, Columns, Indexes, and Tables are the ways we IT people store and manage data for our own convenience. But as long as the user can access data in a reliable, prompt, and comprehensive fashion, she couldn’t care less whether the data comes from a super-sophisticated object-oriented database or from a tattered printed copy of the World Almanac.
How should data be accessed then? I don’t recommend handling data in an explicit manner the way RDBMS vendors tell you to handle it. Data is at the core of the enterprise, but it does not have to be a “visible” core. You don’t visualize data with SQL; you visualize data with services. Instead, I suggest that you handle all access to data in an abstract way, which brings up the need for a Data Sentinel Layer. This layer should be, you guessed it, an SOA-enabled component providing data access and maintenance services.
To put it simply, the Data Sentinel is the gatekeeper and abstraction layer for data. Nothing goes into the data stores without the Sentinel first passing it through; nothing gets out without the Sentinel allowing it. Furthermore, the Sentinel decouples how the data is ultimately stored from the way the data is perceived to be stored. Depending upon your needs, you may choose consolidated data stores or, alternatively, you may choose to follow a federated approach to heterogeneous data. It doesn’t matter. The Data Sentinel is responsible for presenting a common SOA façade to the outside world.
Clearly, a key tenet should be to not allow willy-nilly access to data by bypassing the Sentinel. You should not allow applications or services (whether atomic or composite) to fire their own SQL statements against a database. If you want to maintain the integrity of your SOA design, make sure data is accessed only via the abstraction services provided by the Sentinel.
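To make the tenet concrete, here is a hypothetical contrast between an application reaching around the Sentinel with its own SQL and the same need expressed against Sentinel services. The schema, the interface, and its operations are invented for illustration; the point is only where the knowledge of the storage lives.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.List;
    import java.util.Optional;

    public class CustomerLookup {

        // What not to do: the application talks SQL directly,
        // welding itself to today's schema and storage choices.
        Optional<String> emailByDirectSql(Connection conn, String customerId) throws Exception {
            try (PreparedStatement ps =
                     conn.prepareStatement("SELECT email FROM customer WHERE id = ?")) {
                ps.setString(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? Optional.of(rs.getString(1)) : Optional.empty();
                }
            }
        }
    }

    // What to do instead: the Data Sentinel exposes the data as services; how and where
    // the data is actually stored stays its private business.
    interface CustomerDataSentinel {
        Optional<Customer> findCustomer(String customerId);
        List<Customer> findCustomersByRegion(String region);
        void updateEmail(String customerId, String newEmail);
    }

    record Customer(String id, String name, String email) {}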
Then again, this being a world filled with frailty, there are three exceptions where you will have to allow SOA entities to bypass the abstraction layer provided by the Sentinel. Every castle has secret passageways. I will cover the situations where exceptions may apply later: Security/Monitoring, Batch/Reporting, and the Data Joiner Pattern.
Obviously, data abstraction requires attention to performance, data persistence, and data integrity. Thankfully, there are off-the-shelf tools to help facilitate this abstraction and the implementation of a Sentinel layer, such as Object-Relational mapping, automated data replication, and data caching products (e.g. Hibernate). Whether you choose to use an off-the-shelf tool or to write your own will depend upon your needs, but the use of those tools is not always sufficient to implement a proper Sentinel.  Object-Relational mapping or the use of Stored Procedures, for example, are means to more easily map data access into SOA-like services, but you still need to ensure that the interfaces comply with the SOA interface criteria covered earlier. In the end, the use of a Data Sentinel Layer is a case of applying abstraction techniques to deal with the challenges of an SOA-based system, but one that also demands engineering work in order to deploy the Sentinel services in front of the databases/sources. There are additional techniques and considerations that also apply, and these will be discussed later on.

Friday, November 13, 2009

ESB and the SOA Fabric


A number of new needs emerged with the advent of SOA. First of all, there was no standard way for an application to construct and deliver a service call.  Secondly, there was no standard way to ensure the service would be delivered.  Thirdly, it was not clear how this SOA environment could be managed and operated effectively. Fourthly . . .  well, you get the idea; the list goes on and on.
SOA demands the existence of an enabling infrastructure layer known as middleware. Middleware provides all the necessary services, independent of the underlying technical infrastructure.  To satisfy this need, vendors began to define SOA architectures around a relatively abstract concept: the Enterprise Service Bus, or ESB.   Now, there has never been disagreement about the need to have a foundational layer to support common SOA functions—an enterprise bus of sorts. The problem is that each vendor took it upon itself to define the specific capabilities and mechanisms of its proprietary ESB, oftentimes by repackaging preexisting products and rebranding them to better fit its sales strategies.
As a result, depending on the vendor, the concept of an Enterprise Service Bus encompasses an amalgamation of integration and transformation technologies covering any number of capabilities: Service Location, Service Invocation, Service Routing, Security, Mapping, Asynchronous and Event-Driven Messaging, Service Orchestration, Testing Tools, Pattern Libraries, Monitoring and Management, etc. Unfortunately, when viewed as an all-or-nothing proposition, the ESB’s broad and fuzzy scope tends to make vendor offerings somewhat complex and potentially expensive.
The term ESB is now so generic and undefined that you should be careful not to get entrapped into buying a cornucopia of vendor products that are not going to be needed for your specific SOA environment.  An ESB resembles a Swiss army knife with its many accessories, of which only a few will ever be used. Don’t be deceived; vendors will naturally try to sell you the complete superhighway, including rest stops, gas stations, and the paint for the road signs, when all you really need is a quaint country road. You can be choosy and build your base SOA foundation gradually.  Because of this, I am willfully avoiding use of the term “Enterprise Service Bus”, preferring instead to use the more neutral term, “SOA Fabric.”
Of all the bells and whistles provided by ESB vendors (data transformation, dynamic service location, etc.), the one key function the SOA Fabric should deliver is ensuring that the services and service delivery mechanisms are abstracted from the SOA clients.
A salient feature that vendors tell us ESBs are good for is their ability to integrate heterogeneous environments. However, if you think about it, since you are going through the process of transforming the technology in your company (the topic of these writings, after all!), you should really strive to introduce a standard protocol and eliminate as many legacy protocols as you can.
Ironically, a holistic transformation program should have the goal of deploying the most homogeneous SOA environment possible, thus obviating the need for most of the much-touted ESB transformation and mapping functions. In a new system, SOA can be based upon canonical formats and common protocols, thus minimizing the need for data and service format conversion. This goal is most feasible when applied to the message flows occurring in your internal ecosystem.
Now, you may still need some of those conversion functions for several other reasons, migration and integration with external systems being the most obvious cases. If the migration will be gradual, and therefore requires the interplay of new services with legacy services, go ahead and enable some of the protocol conversion features provided by ESBs. The question would then be how important this feature is to you, and whether you wouldn’t be better off following a non-ESB integration mechanism in the interim.  At least, knowing you will be using this particular ESB function only for migration purposes, you can try to negotiate a more generous license with the vendor.
There are cases where, while striving for a homogeneous SOA environment, you may well conclude that your end-state architecture must integrate a number of systems under a federated view. Your end-state architecture in this case will be a mix of hybrid technologies servicing autonomous problem domains. Under this scenario, it would be best to reframe the definition of the problem at hand from one of creating an SOA environment to one of applying Enterprise Application Integration (EAI) mechanisms. If your end state revolves more around integration, EAI is better suited to performing boundary-level mapping and transformation work.  In this case, go and shop for a great EAI solution, not for an ESB.
If the vendor gives you the option of acquiring specific subsets of their ESB offering (at a reduced price), then that’s something worth considering. At the very least, you will need to provide support for service deployment, routing, monitoring, and management, even if you won’t require many of the other functions in the ESB package. Just remember to focus on deploying the fabric that properly matches your SOA objectives and not the one that matches your vendor’s sales quota.
A quick word regarding Open Source ESBs. . . There are many, but the same caveats I’ve noted for vendor-based ESBs apply. Open Source ESBs are not yet as mature, and the quality of the functions they provide varies significantly from component to component. Focus on using only those components you can be sure will work in a reliable and stable manner, or those which are not critical to the system. Remember, you are putting in place components that will become part of the core fabric. Ask yourself: does it make sense, in order to save a few dollars, to use a relatively unsupported ESB component for a critical role (Service Invocation or Messaging come to mind), versus using a more stable vendor solution?
In the end, if you are planning to use the protocol conversion features packaged in vendor-provided or open source ESBs, I suggest you use them on a discrete, case-by-case basis, and not as an inherent component of your SOA fabric. This way, even as you face having to solve integration problems associated with the lack of standards, at least you won’t be forced into drinking the Kool-Aid associated with a particular vendor’s view of the ESB!

Friday, November 6, 2009

The Orchestrators


Back in the XIX century (that’s the 19th century for all you X-geners!), there was a composer who didn’t know how to play the piano. In fact, he didn’t know how to play the violin, the flute, the trombone, or any other instrument for that matter. Yet, the man managed to compose symphonies that to this day are considered musical masterpieces. The composer’s name was Louis Hector Berlioz, and he achieved this feat by directing the orchestra through each step of his arrangement and composition. His most recognized work is called “Symphonie Fantastique” and, according to Wikipedia, the symphony is scored for an orchestra consisting of 2 flutes (2nd doubling piccolo), 2 oboes (2nd doubling English horn), 2 clarinets (1st doubling E-flat clarinet), 4 bassoons, 4 horns, 2 trumpets, 2 cornets, 3 trombones, 2 ophicleides (what the heck is an ophicleide? A forerunner of the euphonium, I found out. What the heck is a euphonium? Well, check it out in Wiki!), 2 pairs of timpani, snare drum, cymbals, bass drum, bells in C and G, 2 harps, and strings.
By now, you probably get the idea. Mr. Berlioz fully exemplifies the ultimate back-end composite services element: The Orchestrator. Berlioz composed some pretty cool stuff by knowing a) what he wanted to express, b) what specific set of instruments should be used at a particular point in time, and c) how to communicate the notes of his composition to the orchestra.
Every SOA-based system needs its Berliozes.
There are several dimensions involved in defining the role of an orchestrator for SOA. First, as discussed earlier, most orchestrator roles will be provided within the context of an application, not as a part of a service. That is, the orchestration is what defines an application and makes one application different from another. The orchestration is the brain of the application, and it is the entity that decides the manner in which SOA services are called and the flow among those calls.
In some instances, you might even be able to reuse orchestration patterns and apply them across multiple applications. Better still, you can build orchestration patterns by utilizing the emerging Business Process Modeling (BPM) technologies. BPM simplifies the work of creating orchestration logic by providing a visual and modular way of assembling orchestration flows. A small commentary of mine: BPM is not SOA, but BPM requires SOA to work properly.
An apropos question is how much orchestration should be automated in the SOA system as opposed to letting the user manually orchestrate his or her own interactions. To answer this question it is best to remember the complexity rule I stated earlier: the simpler the user interaction, the more complex the system, and vice versa.
Then again, there are limits to the complexity of an orchestration. A full-fledged Artificial Intelligence system could become the ultimate orchestration engine but, unfortunately, such a machine remains in the realm of science fiction.  Cost-Benefit compromises must be made.
Say we have a travel-oriented system and need to find the coolest vacation spots for the month of September. Should we let the user manually orchestrate the various steps needed to reach a conclusion? Each step would indirectly generate the appropriate service calls for searching destinations, filtering unwanted responses, obtaining additional descriptions, getting prices, initiating the booking, and so forth. Or we could consider developing a sophisticated orchestration function that’s able to take care of those details and do the hard work on behalf of the prospective traveler. But should we do it?
The answer lies in the size of “the market” for a particular need. Clearly, there is a need for a travel orchestration capability that can take care of all the details mentioned. After all, isn’t this why Travel Agencies emerged in the first place? If the orchestration is needed by only a few users, then it is best not to spend money and effort attempting to automate something that is too unique. On the other hand, if the request becomes common, then it is preferable to create an automated orchestration function that organizes and integrates the use of SOA services.
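For the travel example, an automated orchestration might look something like the sketch below. Every service it calls is a hypothetical atomic service; the orchestrator simply owns the calling flow.

    import java.util.List;

    // The orchestrator owns the calling flow; the services themselves stay discrete and sessionless.
    public class VacationFinderOrchestrator {

        private final DestinationSearchService search;
        private final PricingService pricing;
        private final BookingService booking;

        public VacationFinderOrchestrator(DestinationSearchService search,
                                          PricingService pricing,
                                          BookingService booking) {
            this.search = search;
            this.pricing = pricing;
            this.booking = booking;
        }

        public String bookCoolestTrip(String month, double maxBudget, int travelers) {
            // 1. Search for candidate destinations.
            List<String> candidates = search.topDestinationsFor(month);
            // 2. Enrich with prices and pick the first one within budget.
            for (String destination : candidates) {
                double price = pricing.quote(destination, month, travelers);
                if (price <= maxBudget) {
                    // 3. Initiate the booking and return a confirmation id to the user.
                    return booking.reserve(destination, month, travelers);
                }
            }
            throw new IllegalStateException("No destination fits the budget for " + month);
        }
    }

    interface DestinationSearchService { List<String> topDestinationsFor(String month); }
    interface PricingService { double quote(String destination, String month, int travelers); }
    interface BookingService { String reserve(String destination, String month, int travelers); }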
The orchestrator’s design should always accommodate the transparency tenets in order to allow horizontal scalability. In other words, if you provide the orchestration via servers located in the system membrane, you will need to design the solution in such a way that you can always add more front-end servers to accommodate increased workloads without disrupting the orchestration processes in existing servers. Because orchestration usually requires the server to maintain some form of state, at least for the duration of a transaction, you will need to incorporate some form of session-stickiness in the orchestration logic. Later on, I will write more about why I recommend that this is the one and only area where a “session state” between the user and the orchestration should exist, even as I still advise keeping backend services discrete and sessionless.
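One simple (and admittedly simplistic) way to get that stickiness is to route every request carrying a given session id to the same orchestration server, for instance by hashing the id over the server list. The sketch below is my own illustration, not a description of any particular product; real deployments tend to rely on consistent hashing or a shared session store instead.

    import java.util.List;

    // Routes all requests for a given session id to the same orchestration server.
    // Note: a plain modulo scheme reshuffles sessions when servers are added or removed;
    // consistent hashing or a shared session store avoids that churn.
    public class StickySessionRouter {

        private final List<String> serverUrls;

        public StickySessionRouter(List<String> serverUrls) {
            this.serverUrls = List.copyOf(serverUrls);
        }

        public String serverFor(String sessionId) {
            int index = Math.floorMod(sessionId.hashCode(), serverUrls.size());
            return serverUrls.get(index);
        }
    }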

Friday, October 30, 2009

The SOA Membrane as the Boundary Layer


Sooner or later it happens to most of us. We grow up and can no longer continue to live in the cocooned environment created by our parents—the comfort and coziness of our youth is gone (unless, as a result of the Grand Recession, you are obliged to return to your parents’ home and are forced to experience the George Costanza-like awkwardness of adulthood, but I digress). Either way, we have to enter the real world, a world where people speak the language of credits and debits and where behaviors are no longer governed by Miss Manners’ etiquette or Mom’s nagging but rather by a set of complex social rules that help us interface with the world. The way we engage with the world, the set of rules we follow, the processes and mechanisms we use to interact with others, the whole cultural context of how to say “please” or “keep the change”, are equivalent to a boundary layer between us and the rest of humanity.
Having created a suitable SOA system (either homogeneous or federated via Enterprise Application Integration tools), we need to enclose it in its own protective cocoon, lest the reckless world outside trample its internal fabric.  The trick is to prevent what is not wanted from getting in while allowing what is wanted to access the system. Here, biology provides us with an ideal model in the workings of the living cell. Just as the membrane of a healthy cell acts as a barrier against harmful agents and judiciously allows the exchange of the enzymes needed to make the cell work in concert with the rest of the organism, we must maintain an SOA membrane that allows the necessary information exchange to take place while keeping the bad guys out of the system.
In IT terms, the membrane is known as the DMZ (Demilitarized Zone). Frankly, I never cared for this term. A DMZ is a buffer zone designed to keep warring factions apart—a zone devoid of hostilities. The term is misleading because, in reality, the DMZ is the area where all access battles are fought. Also, the layer’s role is not to keep warring factions apart but to allow the controlled exchange between participating partners. With the emergence of virtualization approaches such as Cloud Computing, we should take the perspective that the membrane is the region where safe trade occurs. In this area the presentation logic is placed alongside a number of public applications. This is the layer that deals with Business-to-Consumer (B2C) and Business-to-Business (B2B) interactions. In this layer you also must perform data transformations for data exchange with external entities.
In engineering terms the membrane consists of an arrangement of technologies handling the interaction with the external world at each layer of the computing stack, from the security guard manning the entrance to the Data Center to the application displaying the sign-on screens. In the networking layer you have the protocol converters, VPN gateways, IP routers, and load balancers. Further up in the stack, the membrane includes firewalls with the appropriate system-level access passwords and permissions, including specific field-level encryption. Even higher up, the membrane contains the needed data mapping and conversion services. Moving on to the application space, the membrane includes spam filters and user-level authentication mechanisms.
Rather than give a subliminal message, let me state it as loudly and plainly as a used car commercial before a Memorial Day sale: it’s preferable to create the membrane with off-the-shelf technologies rather than to try to develop your own. The capabilities and features needed for this layer are usually standard across the industry, and thus it makes sense to use vendor solutions. In fact, the trend is to have many of the functions needed by the membrane handled by special-purpose hardware appliances.
Alternatively, if you plan to outsource operations, then let the hosting provider worry about the make-up of the membrane. Still, you have to define the required levels of service and make certain the monitoring tools and processes exist to ensure these levels. Either way, the membrane is a component that’s rapidly becoming commoditized. A good thing too, for this is not where you ought to seek competitive IT differentiation (that is, unless you are one of those hosting providers!).
To sum up, the membrane is not the area to invest in internal development. The challenge is to create and view the membrane as an integrated component that can be managed and monitored in a holistic manner even if it consists of an assemblage of products and technologies. If you are creating a membrane you should focus on sound engineering work and vendor component integration; not software development.
Ultimately, a well-designed membrane should be trustworthy enough to allow some relaxation of the security levels inside the system. Also, a well-designed membrane should be flexible enough to allow support for a variety of access devices.  Once you take care of your system’s membrane you can then focus on what happens inside, where the real work takes place, with the Orchestrators.
This is next. . .

Friday, October 23, 2009

The Access Layer


Many who have been around long enough to remember the good old days of data processing may still long for the simplicity and maturity of centrally located mainframes, which could be accessed via a simple line protocol from basic screen devices and keyboards at each client location. Older “dumb terminals”, such as Teletypes, ICOT and 3270 devices, simply captured the keystrokes, which were then duly sent to the mainframe either in character-by-character mode or in blocks. The mainframe then centrally formatted the response, which was then displayed by the access device in the same blind manner Charlie Chaplin hammered widgets on the assembly line of Modern Times.
For a period of time, with the advent of the PC back in the ’80s, a debate ensued about the idea of moving all processing to client devices. For a while, the pendulum swung towards having PCs do the computations, earning them the “fat clients” moniker. After enjoying the exhilarating freedom of not having to depend on the DP priesthood behind the central mainframe glass house, local IT departments began to learn what we are all supposed to have learned during our teenage years: with freedom come responsibilities. As it turned out, trying to keep PC software current with the never-ending stream of version updates and configuration changes, or trying to enforce corporate policies, however flimsy, in this type of distributed environment, soon became a Nightmare on IT Street.
Newton said it best: for every action there is always a reaction. Soon voices from the “other side of the pendulum” began to be heard. Mainly as a strategy to counter Microsoft, which in the early nineties was still the eight-hundred-pound gorilla that Google is today, the folks at Sun Microsystems began pushing the “Network Computer” concept. This was in reality a cry for the dumb terminals of yore, only this time designating the Java Virtual Machine as the soul of the distributed machine.  To be fair, given the maintenance burden presented by millions of PCs requiring continuous Windows upgrades, these network computers did make some sense. After all, network computers were actually capable of executing applications autonomously from the central system and thus were not strictly the same as old-fashioned “dumb terminals”.
In the end, the pendulum did swing back towards Thin Clients. Enter the original Web Browser. This time the appeal of Web Browsers was that, thanks to Tim Berners-Lee, the inventor of the Web, they accelerated the convergence of technology platforms around a standardized access layer. Whereas in the past each company might have used proprietary access technologies, or even proprietary interfaces, web browsers became a de facto standard. The disadvantage was that, well, we were once again dealing with a very limited set of client-level capabilities. The narrow presentation options provided by HTML limited the interface usability. Java Applets solved this constraint somewhat, but then ironically increased the “thickness” of the client as programmers tended to put more processing within the Applet. Thankfully we are now reaching the point where we can strike the proper balance between “thinness” and “thickness” via the use of Cascading Style Sheets and, more recently, Ajax and Dojo.
Now, a word about two of today’s most popular client access solutions: proprietary multimedia extensions such as Macromedia Flash, and what I refer to as “Dumb Terminal” emulators, such as Citrix.  Using Macromedia Flash is great if you are interested in displaying cool animations, enhanced graphics, and such.  It is fine to use languages such as ActionScript for basic input field verification and simple interface manipulation (e.g. sorting fields for presentation), but writing any sort of business logic in these languages is an invitation to create code that will be very difficult to maintain.  Business logic should always be contained in well-defined applications, ideally located in a server under proper operational management.
Technologies such as Citrix basically allow the execution of “Fat Client” applications under a “Thin Client” framework by “teleporting” the Windows-based input and output under the control of a remote browser. My experience is that this approach makes sense only under very specific tactical or niche needs such as during migrations or when you need to make a rare Windows-based application available to remote locations that lack the ability to host the software.  Citrix software has been used successfully to enable rich interfaces for web-based meeting applications (GoToMeeting) when there is a need to display a user’s desktop environment via a browser, or when users want to exercise remote control of their own desktops. Other than in these special cases, I recommend not basing the core of your client strategy around these types of dedicated technologies. Remember, when it comes to IT Transformation you should favor open standards and the use of tools that are based on sound architecture principles rather than on strong vendor products.
To close this discussion on Access technologies: just as networking technologies have become commoditized and we no longer debate the merits of one over another, I suspect we will soon move the discussion of Access technologies to a higher level rather than debating the specific enabling technology to be used. Access-level enabling technologies such as Ajax and others are becoming commodity standards that will support a variety of future access devices in a very seamless fashion.  So, pull out your mobile phone or your electronic book reader, bring your Netbook or laptop, access your old faithful PC, or turn on your videogame machine if you don’t want to fetch your HDTV remote control. It behooves you in this new world of IT Transformation to make it all work just the same!

Friday, October 16, 2009

The SOA Framework




The Ford Model T was the most successful automobile for a good portion of the twentieth century. Millions of Model Ts roamed the roads of America, and if you had opened the hood of one of them, you would have found a very basic machine design consisting of an engine, a magneto (similar to an alternator), and perhaps a battery.
In contrast, when looking under the hood of a modern car, it’s easy to be bewildered by its complexity.  With their fuel-injection systems, anti-lock brakes, intelligent steering systems, safety mechanisms, and many other features, modern cars can better be described as computerized mobility machines. About the only thing Model Ts have in common with modern cars is the fact that they both move.
Trying to explain the workings of a new vehicle in terms of 1920s terminology is almost impossible. Such an explanation requires the use of a new language. The same is true for SOA. The traditional computing paradigm of centralized mainframe-based processing represents the Model T of computing, and designing and explaining SOA, even if it only represents another computing environment, requires a new language.
This new language has more in common with, say, the language used to describe a Broadway play or the workings of interacting organisms in biology than with the language used to describe the original computing paradigms (a “computer”, after all, was a term originally used for the female staff in charge of manually performing census calculations). In this new language you have actors playing the roles of specific services, a script to define the storyline, and the orchestrators to execute it.  SOA is a play, not a monologue.
Still, regardless of the internal workings, a modern car still requires a command console, an engine, wheels, and a chassis.  Likewise, SOA can be defined by the Presentation, Processing, and Data layers. The Presentation occurs in the Access space, and the interface can be viewed as a “membrane” enclosing the system. The Processing layer provides the orchestration of services, and the Data represents the stuff that makes it all worthwhile.
Remember the SOA mesh diagram I showed you earlier?



The diagram gives a somewhat chaotic and anarchic representation of the manner in which a truly distributed service oriented environment operates. It behooves us to impose some order and structure so that the actual SOA system is something we can implement and operate appropriately.  I refer to this structure as “The Framework”; the following are its elements:
·         The Access.  No matter the system, the objective is to ultimately interface with humans. I spoke early on about possible interface technologies in the future, from 3D Virtual Telepresence to technologies that can be implanted in our bodies to extend our senses in a seamless way. We are already living in a time where the access mechanism is becoming irrelevant to the engagement experience. You can check out your Facebook wall from a PC, a cell phone, an iPod, or a game console.
·         The Membrane. If we can envision a world in which we utilize a variety of access devices, we can also envision their touch points as a membrane. The advent of cloud computing already provides the cloud as a metaphor, but the cloud metaphor serves best in depicting the manner in which virtualized computer systems are integrated as a whole working unit. The membrane represents the interface to the information services.
·         The Orchestrator. This is what I like to call “The Wizard of Oz Booth”. The magic behind the curtain is represented by the process rules, information gathering decisions, and alternative workflows evaluated and chosen by the orchestrator.
·         The Fabric. There is no civilization without infrastructure. Indeed, many could argue that civilization is infrastructure.  And what’s infrastructure? Anything that we can bury beneath the ground, that is not dead, and that provides a service in as transparent a fashion as possible is infrastructure. However, I chose the term Fabric because this term better conveys the dynamic nature of the supporting infrastructure. Fabric has two connotations, one as an entity producing goods, and the other as the material substance that forms SOA.
·         The Data Keeper. In a proper SOA environment, even data should be abstracted and accessed as a service. Similar to the role of your high school librarian, you need a formalized Data Keeper responsible for abstracting the access and maintenance of data, so that no one has to worry about details such as whether the data is stored on old Phoenician tablets, Egyptian papyrus, monastic scriptures, ferromagnetic storage, or any of the modern or future ways data will be stored.
In the future everything will be virtual, an abstraction. Enabling this capability is the role of the SOA Framework. Next I will describe in detail each of the previous SOA Framework elements.