Friday, October 23, 2009
The Access Layer
Many who have been around long enough to remember the good old days of data processing may still long for the simplicity and maturity of centrally located mainframes, accessed via a simple line protocol from basic screen devices and keyboards at each client location. Older “dumb terminals,” such as Teletype, ICOT and 3270 devices, simply captured keystrokes, which were then duly sent to the mainframe either character-by-character or in blocks. The mainframe then centrally formatted the response, which the access device displayed in the same blind manner Charlie Chaplin hammered widgets on the assembly line of Modern Times.
For a period of time, with the advent of the PC back in the 80’s, a debate ensued about the idea of moving all processing to client devices. For a while, the pendulum swung towards having PCs do the computations, earning them the “fat clients” moniker. After enjoying the exhilarating freedom of not having to depend on the DP priesthood behind the central mainframe glass house, local IT departments began to learn what we are all supposed to have learned during our teenage years: with freedom come responsibilities. As it turned out, trying to keep PC software current with the never-ending stream of version updates and configuration changes, or trying to enforce corporate policies, however flimsy, across this type of distributed environment, soon became a Nightmare on IT Street.
Newton said it best: for every action there is an equal and opposite reaction. Soon voices from the other side of the pendulum began to be heard. Mainly as a strategy to counter Microsoft, which in the early nineties was still the eight-hundred-pound gorilla that Google is today, the folks at Sun Microsystems began pushing the “Network Computer” concept. This was in reality a cry for the return of the dumb terminals of yore, only this time designating the Java Virtual Machine as the soul of the distributed machine. To be fair, given the maintenance burden presented by millions of PCs requiring continuous Windows upgrades, these network computers did make some sense. After all, they were actually capable of executing applications autonomously from the central system and thus were not strictly the same as old-fashioned “dumb terminals.”
In the end, the pendulum did swing back towards thin clients. Enter the original Web Browser. This time the appeal of web browsers was that, thanks to Tim Berners-Lee, the inventor of the Web, they accelerated the convergence of technology platforms around a standardized access layer. Whereas in the past each company might have used proprietary access technologies, or even proprietary interfaces, web browsers became a de facto standard. The disadvantage was that, well, we were once again dealing with a very limited set of client-level capabilities. The narrow presentation options provided by HTML limited the usability of the interface. Java Applets eased this constraint somewhat, but then ironically increased the “thickness” of the client as programmers tended to put more processing within the Applet. Thankfully we are now reaching the point where we can strike the proper balance between “thinness” and “thickness” via the use of Cascading Style Sheets and, more recently, Ajax and toolkits such as Dojo.
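To make this balance concrete, here is a minimal sketch of the Ajax pattern as it looks from the client side (the endpoint and field names are purely illustrative): the server computes the data, while the browser merely requests it asynchronously and formats it for display.

```javascript
// Thin-client rendering: the server does the computing (business logic);
// the browser only turns the server-supplied JSON into markup.
// The "/accounts/summary" endpoint and field names are hypothetical.

function renderAccountRows(accounts) {
  // Pure presentation: build an HTML table-row fragment from JSON data.
  return accounts
    .map(function (a) {
      return "<tr><td>" + a.name + "</td><td>" + a.balance.toFixed(2) + "</td></tr>";
    })
    .join("");
}

// In the browser, an Ajax call (circa 2009) would fetch the JSON and hand
// it to the renderer -- note that no business computation happens here:
//
//   var xhr = new XMLHttpRequest();
//   xhr.open("GET", "/accounts/summary", true);
//   xhr.onreadystatechange = function () {
//     if (xhr.readyState === 4 && xhr.status === 200) {
//       document.getElementById("accounts").innerHTML =
//         renderAccountRows(JSON.parse(xhr.responseText));
//     }
//   };
//   xhr.send();
```

The client stays “thin” because the only code it owns is display logic; swap the rendering function and the same server data can drive a different device.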
Now, a word about two of today’s most popular client access solutions: proprietary multimedia extensions, such as Macromedia Flash, and what I refer to as “dumb terminal” emulators, such as Citrix. Using Macromedia Flash is great if you are interested in displaying cool animations, enhanced graphics and such. It is fine to use languages such as ActionScript for basic input field verification and simple interface manipulation (e.g. sorting fields for presentation), but writing any sort of business logic in these languages is an invitation to create code that will be very difficult to maintain. Business logic should always be contained in well-defined applications, ideally located on a server under proper operational management.
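As a sketch of where I would draw that line (shown in JavaScript rather than ActionScript, with hypothetical field names), the client may own input verification and presentation-only sorting, and nothing that resembles a business rule:

```javascript
// Client-appropriate logic only. Business rules (pricing, credit limits,
// approvals) deliberately do not appear here -- they belong on the server.
// Field names are illustrative.

function isValidQuantity(value) {
  // Basic input field verification: a positive whole number.
  return /^[0-9]+$/.test(value) && parseInt(value, 10) > 0;
}

function sortForDisplay(rows, field) {
  // Presentation-only sort; copies the array so the data the
  // server sent is never mutated on the client.
  return rows.slice().sort(function (a, b) {
    return a[field] < b[field] ? -1 : a[field] > b[field] ? 1 : 0;
  });
}
```

Anything beyond checks and cosmetics like these is a signal that logic is leaking into the client, where it will be hard to version, test and maintain.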
Technologies such as Citrix basically allow the execution of “fat client” applications under a “thin client” framework by “teleporting” the Windows-based input and output under the control of a remote browser. My experience is that this approach makes sense only for very specific tactical or niche needs, such as during migrations, or when you need to make a rarely used Windows-based application available to remote locations that lack the ability to host the software. Citrix software has been used successfully to enable rich interfaces for web-based meeting applications (GoToMeeting) when there is a need to display a user’s desktop environment via a browser, or when users want to exercise remote control of their own desktops. Other than in these special cases, I recommend not basing the core of your client strategy around these types of dedicated technologies. Remember, when it comes to IT Transformation you should favor open standards and tools based on sound architecture principles rather than on the strength of a vendor’s product.
To close this discussion of access technologies: network technologies have become commoditized to the point that we no longer debate the merits of one over another, and I suspect the discussion of access technologies will soon move to a higher level as well, rather than dwelling on the specific enabling technology to be used. Access-level enabling technologies such as Ajax are becoming commodity standards that will support a variety of future access devices in a seamless fashion. So pull out your mobile phone or your electronic book reader, bring your Netbook or laptop, fire up your old faithful PC or your videogame machine, or simply reach for your HDTV remote control. It behooves you in this new world of IT Transformation to make it all work just the same!