Friday, October 30, 2009

The SOA Membrane as the Boundary Layer

Sooner or later it happens to most of us. We grow up and no longer can continue to live in the cocooned environment created by our parents—the comfort and coziness of our youth is gone (except if as a result of the Grand Recession you are obliged to return to your parents’ home and are forced to experience the George Costanza-like awkwardness of adulthood, but I digress). Either way, we have to enter the real world, a world where people speak the language of credits and debits and where behaviors are no longer governed by Miss Manners’ etiquette or Mom’s nagging but rather by a set of complex social rules that help us interface with the world. The way we engage with the world, the set of rules we follow, the processes and mechanisms we use to interact with others, the whole cultural context of how to say “please”, or “keep the change”, are equivalent to a boundary layer between us and the rest of humanity.
Having created a suitable SOA system (either homogeneous or federated via Enterprise Application Integration tools), we need to enclose it in its own protective cocoon, lest the reckless world outside tamper with its internal fabric. The trick is to prevent what is not wanted from getting in while allowing what is wanted to access the system. Here, biology provides us with an ideal model in the workings of the living cell. Just as the membrane of a healthy cell acts as a barrier against harmful agents and judiciously allows the exchange of the enzymes needed to make the cell work in concert with the rest of the organism, we must maintain an SOA membrane that allows the necessary information exchange to take place while keeping the bad guys out of the system.
In IT terms, the membrane is known as the DMZ (Demilitarized Zone). Frankly, I never cared for this term. A DMZ is a buffer zone designed to keep warring factions apart—a zone void of hostilities. The term is deceiving because, in reality, the DMZ is the area where all access battles are fought. Also, the layer’s role is not to keep warring factions apart but to allow the controlled exchange of participating partners. With the emergence of virtualization approaches such as Cloud Computing, we should take the perspective that the membrane is the region where safe trade occurs. In this area the presentation logic is placed alongside a number of public applications. This is the layer that deals with the Business-to-Consumer (B2C) and the Business-to-Business (B2B) interactions. In this layer you also must perform data transformations for data exchange with external entities.
In engineering terms the membrane consists of an arrangement of technologies carrying the interaction with the external world in each layer of the computing stack, from the security guard manning the entrance to the Data Center to the application displaying the sign-on screens. In the networking layer you have the protocol converters, VPN gateways, IP routers and load balancers. Further up in the stack, the membrane includes firewalls with the appropriate system-level access passwords and permissions, including specific field-level encryption. Even higher up, the membrane contains the needed data mapping and conversion services. Moving on to the application space the membrane includes spam filters and user-level authentication mechanisms.
Rather than give a subliminal message, let me state it as loudly and plainly as a used car commercial before a Memorial Day sale: it’s preferable to create the membrane with off-the-shelf technologies rather than to try to develop your own. The capabilities and features needed for this layer are usually standard to the industry, and thus it makes sense to use vendor solutions. In fact, a trend is to have many of the functions needed by the membrane be handled by special-purpose hardware appliances.
Alternatively, if you plan to outsource operations, then let the hosting provider worry about the make-up of the membrane. Still, you have to define the required levels of service and make certain the monitoring tools and processes exist to ensure these levels. Either way, the membrane is a component that’s rapidly becoming commoditized. A good thing too, for this is not where you ought to seek competitive IT differentiation (that is, unless you are one of those hosting providers!).
To sum up, the membrane is not the area to invest in internal development. The challenge is to create and view the membrane as an integrated component that can be managed and monitored in a holistic manner even if it consists of an assemblage of products and technologies. If you are creating a membrane you should focus on sound engineering work and vendor component integration, not software development.
Ultimately, a well-designed membrane should be trustworthy enough to allow some relaxation of the security levels inside the system. Also, a well-designed membrane should be flexible enough to allow support for a variety of access devices.  Once you take care of your system’s membrane you can then focus on what happens inside, where the real work takes place, with the Orchestrators.
This is next. . .

Friday, October 23, 2009

The Access Layer

Many who have been around long enough to remember the good old days of data processing may still long for the simplicity and maturity of centrally located mainframes which could be accessed via a simple line protocol from basic screen devices and keyboards at each client location. Older “dumb-terminals”, such as Teletypes, ICOT and 3270 devices simply captured the keystrokes which were then duly sent to the mainframe either in character-by-character mode or in blocks. The mainframe then centrally formatted the response which was then displayed by the access device in the same blind manner Charlie Chaplin hammered widgets in the assembly line of Modern Times.
For a period of time, with the advent of the PC back in the 80’s, a debate ensued about the idea of moving all processing to client devices. For a while, the pendulum swung towards having PCs do the computations, earning them the “fat clients” moniker. After enjoying the exhilarating freedom of not having to depend on the DP priesthood behind the central mainframe glass house, local IT departments began to learn what we all are supposed to have learned during our teenage years: with freedom come responsibilities. As it turned out, trying to keep PC software current with the never-ending stream of version updates and configuration changes or trying to enforce corporate policies in this type of distributed environment, no matter how flimsy, soon became a Nightmare on IT Street.
Newton said it best: for every action there is always a reaction. Soon voices from the “other-side-of-the-pendulum” began to be heard. Mainly as a strategy to counter Microsoft which in the early nineties was still the eight-hundred pound gorilla that Google is today, the folks at Sun Microsystems began pushing for the “Network Computer” concept. This was in reality a cry for the dumb terminals of yore; only this time designating the Java Virtual Machine as the soul of the distributed machine.  To be fair, given the maintenance burden presented by millions of PCs requiring continuous Windows upgrades, these network computers did make some sense. After all, network computers were actually capable of executing applications autonomously from the central system and thus were not strictly the same as old-fashioned “dumb-terminals”. 
In the end, the pendulum did swing back towards Thin Clients. Enter the original Web Browser. This time the appeal of Web Browsers was that thanks to Tim Berners-Lee, the inventor of the Web, they accelerated the convergence of technology platforms around a standardized access layer. Whereas in the past each company might have used proprietary access technologies, or even proprietary interfaces, web browsers became a de-facto standard. The disadvantage was that, well, we were once again dealing with a very limited set of client level capabilities. The narrow presentation options provided by HTML limited the interface usability. Java Applets solved this constraint somewhat but then ironically increased the “thickness” of the client as programmers tended to put more processing within the Applet. Thankfully we are now reaching the point where we can strike the proper balance between “thinness” and “thickness” via the use of Cascading Style Sheets and, more recently, Ajax and Dojo.
Now, a word about two of today’s most popular client access solutions: Proprietary Multimedia extensions, such as Macromedia Flash, and what I refer to as “Dumb Terminal” emulators, such as Citrix. Using Macromedia Flash is great if you are interested in displaying cool animations, enhanced graphics and such. It is fine to use languages such as ActionScript for basic input field verification and simple interface manipulation (i.e. sorting fields for presentation, etc.), but writing any sort of business logic with these languages is an invitation to create code that will be very difficult to maintain. Business logic should always be contained in well-defined applications, ideally located in a server under proper operational management.
Technologies such as Citrix basically allow the execution of “Fat Client” applications under a “Thin Client” framework by “teleporting” the Windows-based input and output under the control of a remote browser. My experience is that this approach makes sense only under very specific tactical or niche needs such as during migrations or when you need to make a rare Windows-based application available to remote locations that lack the ability to host the software.  Citrix software has been used successfully to enable rich interfaces for web-based meeting applications (GoToMeeting) when there is a need to display a user’s desktop environment via a browser, or when users want to exercise remote control of their own desktops. Other than in these special cases, I recommend not basing the core of your client strategy around these types of dedicated technologies. Remember, when it comes to IT Transformation you should favor open standards and the use of tools that are based on sound architecture principles rather than on strong vendor products.
To close this discussion on Access technologies: just as network technologies have become commoditized and we no longer debate the merits of one networking technology over another, I suspect we will soon move the discussion of Access technologies to a higher level, rather than debating the specific enabling technology to be used. Access-level enabling technologies such as Ajax and others are becoming commodity standards that will support a variety of future access devices in a very seamless fashion. So, pull out your mobile phone, your electronic book reader, and bring your Netbook, or laptop, or access your old faithful PC, or turn on your videogame machine, if you don’t want to fetch your HDTV remote control. It behooves you in this new world of IT Transformation to make it all work just the same!

Friday, October 16, 2009

The SOA Framework

Early Ford Model Ts were the most successful automobile for a good portion of the twentieth century. Millions of Model Ts roamed the roads of America, and if you had opened the hood of one of them, you would have found a very basic machine design consisting of an engine, a magneto (similar to an alternator) and perhaps a battery.
In contrast, when looking under the hood of a modern car, it’s easy to be bewildered by its complexity.  With their fuel-injection systems, anti-lock brakes, intelligent steering systems, safety mechanism, and many other features, a modern car can better be described as a computerized mobility machine. About the only thing Model Ts have in common with modern cars is the fact that they both move.
Trying to explain the workings of a new vehicle in terms of 1920’s terminology is almost impossible. Such an explanation requires the use of a new language. The same is true for SOA. The traditional computing paradigm of centralized mainframe-based processing represents the Model T of computing, and designing and explaining SOA, even if only to represent another computer environment, requires a new language.
This new language would have more in common with, say, the language used to describe a Broadway play or the workings of  interacting organisms in biology than with the language used to describe the original computing paradigms (a “computer”, after all, was a term originally used for the female staff in charge of manually performing census calculations). In this new language you have actors playing the roles of specific services, a script to define the storyline and the orchestrators to execute it.  SOA is a play; not a monologue.
Still, regardless of the internal workings, a new car still requires the existence of a command console, an engine, and wheels and chassis.  SOA can be defined by the Presentation, Processing, and Data Layers. The Presentation occurs in the Access space, and the interface could be viewed as a “membrane” enclosing the system. The Processing layer provides the orchestration of services and the Data represents the stuff that makes it all worthwhile.
Remember the SOA meshed diagram I showed you earlier?

The diagram gives a somewhat chaotic and anarchic representation of the manner in which a truly distributed service oriented environment operates. It behooves us to impose some order and structure so that the actual SOA system is something we can implement and operate appropriately.  I refer to this structure as “The Framework”; the following are its elements:
· The Access. No matter the system, the objective is to ultimately interface with humans. I spoke early on about possible interface technologies in the future, from 3D Virtual Telepresence to technologies that can be implanted in our bodies to extend our senses in a seamless way. We are already living in a time where the access mechanism is becoming irrelevant to the engagement experience. You can check out your Facebook wall from a PC, a cell phone, an iPod, or a game console.
· The Membrane. If we can envision a world in which we utilize a variety of access devices, we can also envision their touch points as a membrane. The advent of cloud computing already provides the cloud as a metaphor, but the cloud metaphor serves best in depicting the manner in which virtualized computer systems are integrated as a whole working unit. The membrane represents the interface to the information services.
· The Orchestrator. This is what I like to call “The Wizard of Oz Booth”. The magic behind the curtain is represented by the process rules, information gathering decisions, and alternative workflows evaluated and chosen by the orchestrator.
· The Fabric. There is no civilization without infrastructure. Indeed, many could argue that civilization is infrastructure. And what’s infrastructure? Anything that we can bury beneath the ground, that is not dead, and that provides a service in as transparent a fashion as possible is infrastructure. However, I chose the term Fabric because this term better conveys the dynamic nature of the supporting infrastructure. Fabric has two connotations, one as an entity producing goods, and the other as the material substance that forms SOA.
· The Data Keeper. In a proper SOA environment, even data should be abstracted to be accessed as a service. Similar to the role of your high school librarian, you need a formalized Data Keeper responsible for abstracting the access and maintenance of data to ensure no one has to worry about such details as to whether data is being stored in old Phoenician tablets, Egyptian papyrus, Monastic scriptures, ferromagnetic storage, or any of the modern or future ways data is to be stored.
In the future everything will be virtual, an abstraction. Enabling this capability is the role of the SOA Framework. Next I will describe in detail each of the previous SOA Framework elements.

Friday, October 9, 2009

The Service Interfaces

Do you want to watch TV? Grab the remote control and press the ON button. To mute the sound, press Mute. Simple. The service interfaces represent a binding contract between the service provider and the client. Ideally, the contract will be as generic as possible so that it can be flexible and you won’t need to change it for trivial reasons.  On the other hand, you have to ensure the interface is complete and sufficiently specific to ensure there are no ambiguities during the request and the response.
The contract should not assume any explicit knowledge between the client and the service provider. In other words, the more abstracted and decoupled the interfaces are between the client and the server, the better.  Imagine if every time you drove to a fast-food window you were expected to order the meal differently depending on who was taking the order.
Web services have gained quick acceptance because they rely on high level interfaces like XML. SOAP (Service Oriented Architecture Protocol) improves the situation even more by enforcing an interface view based upon WSDL (Web Services Description Language) as opposed to a view based upon data structures. Other approaches such as REST (Representational State Transfer) utilize the Web stack to provide suitable abstracted interfaces. However, regardless of the specific interface semantics, the point remains: a good interface should completely decouple the HOW a service provider works from the WHAT the service is offering. In the end, the client of the service doesn’t care whether the TV channel is changed via electronic circuitry or via a little gnome living inside the television (an uncomfortable situation for gnomes these days thanks to the advent of flat screens!).
But returning to our restaurant metaphor. . . You have probably been in one of those fast-food places where you can enter your order via a touch-screen. The result is that instead of having an $8/hour employee take your order, you have an $8/hour employee behind the Kiosk guiding you on how to input the order, and probably making you feel like an ignoramus. Unlike ordering your meal from a human being, using a touch-screen exposes some of the intrinsic processes used by the restaurant, and forces you to follow a specific, usually awkward flow, while ordering. This is one of the reasons touch-screens to order meals have failed to really take hold and, analogously, it’s a reason that an older “SOA Protocol” like CORBA (Common Object Request Broker Architecture) failed to catch on as well. As with the touch-screen example, CORBA forced the client to match the server interface in a way that was not sufficiently transparent. Similarly, we cannot rightly consider remote object invocation protocols such as RMI (Remote Method Invocation) or the analogous RPC/XML (Remote Procedure Call with XML) to provide true SOA interfaces. These protocols force the client to make assumptions about the object methods and data types, while failing to properly hide the implementation of the service, such as the way the called “service” represented by the object is constructed or initiated, and the way various data types are handled.
The difference between a service and a function is subtle, but the way to disambiguate it is clear: If the “function” being called is potentially required to be placed in a separate environment or can be provided by a separate vendor, then it should be defined as a service.  Yes, RMI/Java APIs are okay for “local services”, but beware of this terminology. If you recall the transparency credo, you know that talking about “local” services is a mistake. If you intend to create a true service, then I suggest you expose it properly from its inception. As such, it should always be exposed as a decoupled service with a truly abstracted and portable interface.
Remote Object Invocation and other function-level interfaces fail to meet the implementation transparency credo required by SOA, making the resulting “service-less” SOA system as pointless as decaffeinated coffee or alcohol-free beer.
While some might argue the “merits” in using RMI or RMI-like protocols to improve SOA performance, this performance improvement, if any, comes at the cost of flexibility.  Why? The moment you have to grow the system and try to convert the “local” service into a real service you are bound to face unnecessary decoupling work. This stage of the design process is not where we should be worrying about performance. Creating a function where a service is needed simply to avoid “the overhead” of XML or SOAP is not an appropriate way to design (in any case, said overhead can be minor when using coarse-grained services). Define the services you need first, and then you can focus on streamlining their performance.
Yes, there is a role and a place for RMI and Object Interfaces when you are certain you are creating a function and not a service. Functions are usually fine-grained and can certainly be used for specific intra-system calls to shared common objects. But the bottom-line is this: in case of doubt, use real SOA interfaces.
The beauty of respecting the transparency credo and enforcing the abstraction layer provided by properly laid down service interfaces is that you will then be in a position to leverage the tremendous powers that the underlying service framework provides in rapidly leveraging service ecosystems for the quick delivery of solutions.
More on this next.

Friday, October 2, 2009

On the Granularity of Services

You’re seated in a fancy restaurant ready to enjoy a nice gourmet meal.  The waiter shows up with the menu, but instead of a list of entrees and appetizers, you are confronted with a catalogue of recipes. You order a Tuna Tartare as appetizer. The waiter stares at you with a bewildered expression on his face. “Pardon?” he asks. “I’d like a Tuna Tartare,” you insist. He doesn’t understand and it finally hits you, he’s expecting you to guide him through each step of the recipe. “Heck,” you think, this must be some kind of novelty gimmick, like Kramer’s make-your-own-pizza idea in a classic Seinfeld episode, and so you begin the painstaking process of preparing for the appetizer:
“Please get 3 ¾ pounds of very fresh tuna. Dice the tuna into 1/4-inch cubes and place it in a large bowl.” The waiter scribbles furiously. “Got this part, sir, I’ll be right back!” he says as he dashes to the kitchen to begin preparing your order.
Reading from the menu, you continue when he returns by requesting that he combine 1 ¼ cups of olive oil, the grated zest of 5 limes and 1 cup of freshly squeezed lime juice in a separate bowl. He runs back to the kitchen before you get a chance to tell him to also add wasabi, soy sauce, hot red pepper sauce, salt, and pepper to the bowl. . .
You get the idea. There are different ways to ask for services. Let’s think of a more realistic computer design choice. Say you need to calculate the day of the week (What day does 10/2/2009 fall on?). If you were to define “Calculate-Day-of-the-Week” as a service, then you would be expected to allow this service to run in any computer, anywhere in the world (remember the transparency credo I covered earlier!), and to be reachable via a decoupled interface call. If you were to answer, “Okay! No problem”, I would have to then ask you whether this is actually a sensible option. What would be the potential performance impact of having to reach out to a distant computer every time a day of the week calculation is needed?
Remembering the definition of services that I provided earlier, you insist that “Calculate-Day-of-the-Week” is definitely a service that provides a direct business value.
For SOA purposes a service represents a unit of work that a non-technical person can understand as providing a valuable stand-alone capability.
You can argue that “Calculate-Day-of-the-Week” is in fact a unit of work that the salesperson, a non-technical person, can understand and that she will need to access with her Blackberry. In that case, I would then yield to the argument because you have shown that the calculation has business logic that is relevant to your company.
If, on the other hand, “Calculate-Day-of-the-Week” is needed only by programmers, and there is no requirement for it to be directly accessed by anyone in the business group, then this is something that should be handled as a programming function and not as a service. 
If the reason “Calculate-Day-of-the-Week” is needed is because the calculation is part of a broader computation, say to find out whether a discount applies to a purchase (“10% off on Wednesdays!”), then the real service ought to be “Determine-Discount” and not a day of week calculation. You see, defining what constitutes a service can be somewhat subjective.
Your team should apply similar reasoning when determining services: Calculating the hash value of a field is a function, not a service. Obtaining passenger information from an airline reservation system is a service, but appending the prefix “Mr.” or “Ms.” to a name should not be considered a service.
Now, to be fair, there will always be those fuzzy cases that will demand your architecture team to make a call on a case-by-case basis.  If obtaining a customer name is needed for a given business flow, then it can be considered a service. However, if obtaining the customer name is part of a business process that is a part of assembling all customer information (address, phone number, etc.) you should really have a “Get-Customer-Information” service so as not to oblige the client to request each information field separately. 
In general, when it comes to services, it is better to start with fewer, coarser services and then move on to less coarse services on a need by need basis. In other words, it’s better to err on the side of being coarse than to immediately expose services that are too granular. It’s ultimately all about using common sense. Remember the restaurant example. When you order food in a restaurant it’s better to simply look at the menu and order a dish by its name.
Finally, even if a function is determined not to be a service, and therefore does not need to be managed with the more comprehensive life-cycle process used for services, there is no excuse for not following best-practices when implementing it. Just as with services, make certain the function is reusable, that it does not have unnecessary inter-dependencies, and that it is well tested. You never know when you may need to elevate a function to become a service.
But most importantly, the secret sauce in this SOA recipe is the interface: both services and functions must have well-defined interfaces.
More on this next week!
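
The “Determine-Discount” case from this post, sketched as an added example (it is not code from the original post): the coarse-grained service is the only thing a client can call, the day-of-week calculation stays a local function behind it, and the interface rejects anything outside its contract. The JSON field names, the ISO date format and the amount ceiling are assumptions made for this illustration.

```python
import json
from datetime import date
from decimal import Decimal, InvalidOperation

# Constructed contract for this example: the only fields a request may carry.
REQUEST_FIELDS = {"purchase_date", "amount"}
MAX_AMOUNT = Decimal("1000000")  # assumed ceiling, keeps arithmetic bounded
_DAYS = ("Monday", "Tuesday", "Wednesday", "Thursday",
         "Friday", "Saturday", "Sunday")


def day_of_week(d: date) -> str:
    """Programming function: fine-grained, local, never exposed to clients."""
    return _DAYS[d.weekday()]


def _discount_percent(d: date) -> int:
    """Business rule from the text: 10% off on Wednesdays."""
    return 10 if day_of_week(d) == "Wednesday" else 0


def _rejected(reason: str) -> str:
    return json.dumps({"status": "rejected", "reason": reason})


def determine_discount(request_json: str) -> str:
    """Coarse-grained service: JSON text in, JSON text out.

    The client states WHAT it wants (the discount for a purchase) and never
    learns HOW it is computed, so the rule can change without a new contract.
    """
    try:
        req = json.loads(request_json)
    except (TypeError, ValueError):
        return _rejected("request is not valid JSON")
    # A complete, unambiguous contract: exactly these fields, nothing extra.
    if not isinstance(req, dict) or set(req) != REQUEST_FIELDS:
        return _rejected("request must contain exactly: amount, purchase_date")
    if not all(isinstance(req[k], str) for k in REQUEST_FIELDS):
        return _rejected("all fields must be strings")
    try:
        purchased = date.fromisoformat(req["purchase_date"])
        amount = Decimal(req["amount"])
    except (ValueError, InvalidOperation):
        return _rejected("purchase_date must be an ISO date, amount a decimal")
    if not amount.is_finite() or not (0 <= amount <= MAX_AMOUNT):
        return _rejected("amount out of range")
    percent = _discount_percent(purchased)
    due = (amount * (100 - percent) / 100).quantize(Decimal("0.01"))
    return json.dumps({"status": "ok", "discount_percent": percent,
                       "amount_due": str(due)})


if __name__ == "__main__":
    # Constructed sample requests: a Wednesday, a Friday, and a client that
    # tries to supply the day itself instead of letting the service decide.
    for sample in (
        '{"purchase_date": "2009-10-07", "amount": "100.00"}',
        '{"purchase_date": "2009-10-02", "amount": "100.00"}',
        '{"purchase_date": "2009-10-02", "amount": "100.00", "day": "Wednesday"}',
    ):
        print(determine_discount(sample))
```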