Saturday, December 4, 2010

The Web 3.0 Myth and the Emergence of New Channels

In a scene from one of my favorite movies, the cult mockumentary “This Is Spinal Tap,” the lead guitarist proudly brags about his amplifier. “This one is the best gear there is,” he says. “Why?” asks the off-camera voice. “’Cause this one’s volume knob goes all the way to 11, see?”

The guitarist goes on to explain that on most amplifiers the volume control has only ten notches, while this one has 11.

When I hear the term Web 3.0 being used, this Spinal Tap scene always comes to mind. “Web 3.0 is better ‘cause 3 is higher than 2, see?”

When the WWW was invented, the first web pages were basic experimental pages of the “Hello World” variety. Soon pages began to sprout up with an informational focus—simple descriptions about the owner of the page with some images—to the point that by 1995, the vast majority of Web pages were the equivalent of the About Us section on most web sites.

Search engines and web indexers began to appear too. Soon, web sites began to exhibit a measure of interactivity: users were able to enter their information on forms and submit it to the site owner. The next logical step was transaction-oriented services. This saw the emergence of travel reservation sites and the beginning of serious online commerce. The rest is more contemporary history. Someone came up with the brilliant marketing term “Web 2.0” to highlight the emergence of social networking sites. Not that Web 2.0 represented a true technological breakthrough. When you think about it, sites like Facebook and LinkedIn are essentially template-driven, personal web pages that can be created and maintained without the user having to learn stuff like HTML or XML. So, in summary, the progression of the Web has been as follows:

· First Wave: “Look at me.” In this wave the early adopters created quick pages by directly editing HTML files and entering fairly innocuous entries intended to establish a presence.

· Second Wave: “Let me Inform You.” Some advanced companies and most of academia published their web pages with descriptions and bibliographies intended to inform their readers. Around this time, the rush to grab domain names began.

· Third Wave: “Please tell me.” Forms began to be used for the purpose of asking the reader to enter his name and contact information or to provide comments for follow up.

· Fourth Wave: “Transact with me.” Some audacious companies began to expose their product inventory. The gradual adoption of the SSL (Secure Sockets Layer) protocol, combined with web encryption, enabled people to trust the Internet as a carrier of credit card information.

· Fifth Wave (Web 2.0): “Get Involved.” Initially, sites facilitating the creation of blogs made it easier for the more extroverted among us to begin publishing our tales without having to actually understand the technical elements of Web page construction. Sites like MySpace, Friendster, LinkedIn, and Facebook took this paradigm further by creating communities that enabled people to expose their likes and dislikes, profiles, and comments in a structured fashion.

What will the next Web wave be then? The putative Web 3.0? I don’t believe so. There will not be a Web 3.0 any more than there truly was a Web 2.0. However you care to define this eleventh notch for the Web, the fact is that the Web is in the process of becoming a hidden commodity, a utility like TCP/IP, the networking protocol that powers the Internet.

Just as the Cambrian era saw the rapid proliferation of diverse sea creatures undergoing an evolutionary frenzy, we are now witnessing the emergence of the “End Point as a Channel” phenomenon. For example, until recently, making web content available to cellular phones was an afterthought. Nowadays, the popularity of smart-phones, whether iPhone, Android, BlackBerry or Windows-based, is making it obvious that these devices represent a brand new distribution channel. Where we once had Web pages we now have “Apps”. The burgeoning success of emerging tablet devices will only push this paradigm further.

In a recent article, Tim Berners-Lee, the inventor of the Web, expressed concerns that Facebook, iTunes and other social networking sites were counter-currents to the WWW, acting as walled gardens and ultimately running against the philosophy of openness and sharing that underpins the Web. True, the early AOL was a walled-garden community that was obliterated by the emergence of the open World Wide Web, but it now seems that we have come full circle. Closed communities are the “in” thing, and the Web is seen only as the common ground. You may even have noticed that some companies no longer put their web site URL in their advertisements, preferring to list their Facebook page instead. Still, this does not mean that the Web will go away; nor is email likely to disappear, despite recent claims that it is being used less and less due to the heightened use of the internal messaging systems available on social sites. Instead, we should view the recent emergence of “Social Networking” sites and other content delivery mechanisms as the new apex in the IT services pyramid.

What has actually been happening is that, as the Web has become a commoditized infrastructure component, it is no longer the one and only information channel. It is now simply the channel of reference among an exploding variety of information and distribution channels. The diagram below depicts this paradigm shift.

What does all this mean to you as a system architect? Well, basically it reinforces the importance of creating a layered, service-oriented architecture that allows you to support, with a minimum of effort, any channel the world throws at you. Aside from the presentation layer, you should avoid developing channel-specific components. Content Management, Merchandising Engines, Security Servers, and all other backend infrastructure should be capable of operating in a channel-agnostic manner. In addition, this new world reinforces the value of exposing backend services via SOA and the importance of establishing the right infrastructure, capable of supporting a variety of SLAs and security modes at the boundary between the internal systems and the exploding channel zoo. Aren’t you glad you moved to transform your IT systems after all?
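To make the point concrete, here is a minimal sketch of the channel-agnostic idea. The service and adapter names are hypothetical, and the code is only an illustration of the layering, not a prescription for any particular stack:

```python
# A minimal sketch of a channel-agnostic service layer, assuming hypothetical
# service and adapter names (not any particular framework's API).

class OfferService:
    """Backend service: knows nothing about the channel that calls it."""
    def get_offers(self, customer_id: str) -> list:
        # A real implementation would call the merchandising engine, apply
        # pricing and security rules, etc. Here we return canned data.
        return [{"sku": "ABC-123", "price": 19.99}]

class WebChannelAdapter:
    """Presentation layer for the web channel: renders offers as HTML."""
    def __init__(self, service: OfferService):
        self.service = service
    def render(self, customer_id: str) -> str:
        offers = self.service.get_offers(customer_id)
        return "".join(f"<li>{o['sku']}: ${o['price']}</li>" for o in offers)

class MobileAppAdapter:
    """Presentation layer for a smart-phone App: returns the same offers as JSON-ready data."""
    def __init__(self, service: OfferService):
        self.service = service
    def to_payload(self, customer_id: str) -> list:
        return self.service.get_offers(customer_id)

# Supporting a new channel (tablet, kiosk, voice) means adding one thin
# adapter; the backend service is untouched.
service = OfferService()
print(WebChannelAdapter(service).render("cust-42"))
print(MobileAppAdapter(service).to_payload("cust-42"))
```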

Friday, November 19, 2010

Information Distillers, Aggregators & Your Electronic “Mini-mes”

As the years ahead move us toward the enabling of understanding and wisdom, we should expect an increase in commercially available services leveraging these new automation models. For example, consulting is already an embedded part of the services provided by professionals, but in the future, consulting will evolve into a set of online services provided via moderated access to human experts or via access to software-based expert systems. Whether they are made of flesh or metal, these bona-fide Information Distillers will always be ready to satisfy your thirst for information at the push of a button and the opening of your PayPal wallet.

Emerging Information Distillers will successfully locate and turn the required information into “understandable bits” which can be digested by customers under several revenue models. While in principle these services are not fundamentally different from those provided by traditional consulting entities such as the Gartner Group or your corner H&R Block, the difference is that they will be democratized and available to all—individuals and companies alike. For example, in travel, distillers will not only publish travel magazines (electronically or via hard copy), but will also package tours and offer specially negotiated travel deals. (TripAdvisor.com can be seen as a first-generation distiller leveraging the power of social networking.) However, information distillers of the future will be able to provide personalized advice either from paid human experts or from next-generation expert mining tools.

As electronic commerce becomes more pervasive, and the speed and specialization of business increases, proxies or electronic avatars will become more prevalent. Functionally, such an avatar will not be much different from the role a travel agency plays today when booking travel for a client. However, whereas today’s agencies do not truly represent the interest of the traveler (agencies, in principle, represent the interest of the supplier), future avatars will act as your proxies—your electronic “mini-mes”—working automatically under business and engagement rules that you’ll define in order to be presented with the best deal.

As artificial intelligence becomes mainstream, and as technical standards facilitating electronic brokering are implemented, these avatars will become virtual software entities capable of representing you, the consumer. Eventually, avatars will completely broker and execute the best possible arrangements for you.

This type of avatar is already a reality in the hectic world of electronic trading, where complex software algorithms make nanosecond-level decisions on whether to buy or sell stock assets. From this world, we should be forewarned that, as proxies become more commonplace, we must be prepared to face the consequences of relying too heavily on software avatars endowed with automated decision-making permits. On September 7, 2008, in an already volatile and jittery financial setting, a Florida newspaper accidentally republished an old web article detailing United Airlines’ 2002 bankruptcy filing. Google, ever obliging, indexed the article and distributed it to e-mail subscribers who had requested alerts on any news regarding this airline. This is where automated software proxies took over. Stock trading software scanned the article, found the keywords “bankruptcy” and “United Airlines”, and automatically ordered sales of UAL stock. Other software robots, responsible for monitoring unusually large trade volumes in the stock market, quickly took notice of the sudden sale of UAL stock and proceeded to sell their holdings. The outcome was a selling frenzy that resulted in a loss of more than one billion dollars for UAL stockholders. The Securities and Exchange Commission began an investigation to determine responsibility. After all, who is at fault? The Florida newspaper? Google? The developers of the software? The companies that transact stock in such a perilous manner?

Clearly, we are entering a brave new world that requires added protocols to safeguard against software agents going rogue and to answer the myriad concerns related to protection of privacy; not to mention the expected security issues related to fraud and software impersonators, with the logical progression to identity theft. In the meantime, if you are on the supplier’s side, you can start designing your systems to enable this future “Electronic Mini-Me” concept. Define, and be prepared to have, the appropriate services and architecture layers that can leverage the deployment of automated selling brokers.

As you define this architecture, you will have to rely heavily on the implementation of publish/subscribe systems and asynchronous response patterns. You will also need to focus on implementing a sophisticated combination of Business Rules and Business Process Management systems that allow your business team to easily configure and define the automated way broker services will be made available to your customers. For example, these brokers could be configured to make distressed inventories available electronically and to dynamically price online offers against available inventory via revenue management rules as applicable. Think of how an electronic auction on eBay.com works, but on steroids.
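As an illustration, here is a minimal publish/subscribe sketch of the automated-broker idea described above. The topic name, the distressed-inventory rule and the in-memory message bus are all hypothetical placeholders, not a specific product’s API:

```python
# A minimal publish/subscribe sketch for automated broker services.
# Topic names, the business rule and the in-memory bus are illustrative only.
from collections import defaultdict

class MessageBus:
    """Toy asynchronous-style bus: publishers and subscribers never meet directly."""
    def __init__(self):
        self._subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)
    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

def distressed_inventory_rule(event):
    """A business rule a broker might apply: discount inventory about to expire."""
    if event["days_to_expiry"] <= 3:
        offer = {"sku": event["sku"], "price": round(event["list_price"] * 0.6, 2)}
        print("Broker publishes offer:", offer)

bus = MessageBus()
bus.subscribe("inventory.distressed", distressed_inventory_rule)

# The inventory system publishes whenever it likes; the broker reacts on its own schedule.
bus.publish("inventory.distressed",
            {"sku": "HOTEL-NYC-0412", "days_to_expiry": 2, "list_price": 250.0})
```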

Just as the electronic avatars discussed here are a practical instantiation of the move towards cyber-understanding, future systems applying basic rules-of-wisdom will emerge. True, Wisdom will always be the purview of humans and not computers. However, following the precepts of the “Wisdom of the Masses”, we are now experiencing the benefits of the wisdom provided by virtual communities: places where we find reviews on a broad range of topics, “How-To” tips, and better deals. A case can be made that this wisdom is an emergent property, resulting from the aggregation of large catalogues and information, and the associated tie-in of user areas and access to content. These areas are best represented by “Virtual Malls” such as Yahoo.com, Overstock.com and Amazon.com, but they are also expected to rapidly merge with social networking sites in the so-called Web 2.0 world.

There is already a linkage between merchandiser sites and places such as LinkedIn.com, MySpace.com, and Facebook.com. This integration will ultimately occur via business partnerships or mergers, but it will initially be accelerated by the automation known as Collective Intelligence, the process that combines the behavior, preferences or ideas of a group of people or sites to gain new insights[1].

Analogously, we should expect that, as these vertical industries mature, we will continue to see the emergence of portals specialized by industry. That is, we will see “electronic virtual malls” integrating offerings on the one hand, and acting as “aggregators” for specific industry groupings on the other. The aggregators will be able to convert the volumes of data found on the Internet into useful information. This information will be presented in a form customized for information seekers as a consolidated package of knowledge. The automated assembly of related knowledge designed to fulfill the "seeker's" goals can be tied to the area of specialization of the site. This trend will be evident first in consumer-facing verticals such as the travel sites Expedia.com, TripAdvisor.com and Travelocity.com, and in various other special-domain sites such as MusiciansFriend.com and WebMD.com. The question you’ll need to answer is how to make your company part of this new world.



[1] Toby Segaran, Programming Collective Intelligence (O’Reilly, 2007).

Friday, November 5, 2010

Data, Taxonomies, and the Road to Wisdom Revisited

While early computing was referred to as “Data Processing”, the term “Information Systems” became prevalent with the increased sophistication of functionality. This makes sense. After all, there has always been a Platonic goal to have computers process information just as humans do, except much, much faster. As originally framed, this goal was known as AI (Artificial Intelligence) and, despite some early successes with heuristic algorithms and neural networks, AI research eventually reached major roadblocks. Ultimately, AI’s most touted commercial achievement was the codification of narrow domains of expertise under the guise of “Expert Systems”. Expert Systems went through a hyped-up phase back in the Eighties only to fade away with the realization that the logic needed to replicate how humans process and organize knowledge is dependent on contextual, subjective, and often inexpressible decision-making rules. In other words, we humans process knowledge in a manner that is often inaccurate, biased or even intuitive. Still, the subjectivity of our knowledge has served us well along our evolutionary path and is more than enough to help us deal with quotidian existential needs, even if this knowledge is not always precise. (Who cares if a tiger wasn’t actually hidden in the brush? Your ancestor taking off upon the rustling of leaves was only being sensible!)
Recently, implementing “fuzzy” and flexible computer logic has yielded more effective AI applications, particularly in systems applying Bayes’ Theorem, which relies on prior and conditional probabilities. Modern Machine Learning algorithms applying this and other algorithmic variations usually return reliable results for problems dealing with pattern and language recognition. However, given the probabilistic nature of the base algorithms, results are sometimes wrong. To err is not only human. Today’s computers can also err.
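For reference, Bayes’ Theorem gives the probability of a hypothesis H given evidence E as

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\]

that is, a prior belief P(H) is updated by the likelihood of the observed evidence. A spam filter, for example, raises the probability that a message is spam each time it sees another tell-tale word, which is also why its verdicts are probabilistic rather than guaranteed.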
Given that Bayes’ rule and other algorithms provide results that are not always correct, we may well conclude that there is a universal law stating that intelligence implies fallibility. If we are ever going to rely on these systems in life-and-death scenarios, we will need to incorporate some form of feedback control in the way they reach their results. Perhaps in humans, “Wisdom” is that control.
But how then do we attain Wisdom?
By now, you may have noticed that I am using the term “Information” in its most generic sense. Information can often be “misinformation”. Yet, misinformation and even lack of information are also forms of information (a dog that failed to bark was the clue that helped Sherlock Holmes solve a crime). When implementing information systems it does help to classify the type of information we are dealing with. There is raw information, and there is wisdom-based information. This progression to wisdom involves a series of steps that must be methodically climbed: Data, Content, Knowledge, Understanding and ultimately Wisdom.
  1. Data is primarily raw figures and “facts”; by nature it is voluminous and difficult to deal with and so is best stored and communicated in a mechanical way. Data can be wrong. The old GIGO adage (Garbage In/Garbage Out) captures what ought to be the highest priority in the automation of data: ensuring that the data inputted into the system is correct. Do not become confused by the term “Big Data”, by the way. Big Data actually refers to the Knowledge step on the ladder as it deals with the acquisition of knowledge via so-called Data Science analytics.
  2. Content is data that has been collated, ordered and classified. That is, Content is Data plus its Taxonomy. Taxonomy is the categorization or classification of entities within a domain (the actual structure of the domain is defined by its Ontology); a minimal code sketch of this Data-plus-Taxonomy idea appears after this list. Consider the following taxonomies used to describe the animal kingdom.
    Linnaeus Taxonomy:
    • Kingdom: Animals, Plants, Single Cells, etc.
    • Phylum: For Animals: Chordatas, Nematoda (worms), etc.
    • Class: mammals, amphibians, aves. . .
    • . . . et cetera
    In “The Analytical Language of John Wilkins,” Jorge Luis Borges, the famed Argentinean writer who belongs to the ontological set of writers who deserved to win the Nobel Prize but didn’t, describes “a certain Chinese encyclopedia,” the Celestial Emporium of Benevolent Knowledge, which lists this singular taxonomy for animals, classifying them as follows:
    • those that belong to the Emperor
    • embalmed ones
    • those that are trained
    • suckling pigs
    • mermaids
    • fabulous ones
    • stray dogs
    • those included in the present classification
    • those that tremble as if they were mad
    • innumerable ones
    • those drawn with a very fine camelhair brush
    • those that have just broken a flower vase
    • those that from a long way off look like flies
    • others
  3. Knowledge is what is produced when the information is placed in context and the resultant significance of relationships within the data is realized. The addition of contextual information requires some element of human input; so the progression to this stage will most likely not be possible through the use of computers alone.
    To see the difference between Content and Knowledge, I suggest you try this exercise: Go to google.com and enter “IBM Apple”. You will get content listing all the sites in which IBM and Apple are discussed. Now, go to wolframalpha.com and enter, “IBM Apple”. You will get a digested and structured response comparing these two companies. The former is content; the latter is beginning to look a lot like knowledge.
    Production and discovery of knowledge is at the core of many start-ups’ business plans today. The emerging field of Data Science leverages big data sets to mine data in ways that produce knowledge.
    Organizations, such as Gallup or Nate Silver’s FiveThirtyEight, exist to mine data and content and produce knowledge on a variety of topics. Voting trends, consumer preferences, etc. are examples of mined knowledge. Business Intelligence, associated Data Mining technologies, and the more recent Internet-driven “Collective Intelligence” applications are examples of the more recent trends in the automation of knowledge acquisition. We are in the midst of moving from the Age of Content to the Age of Knowledge.
  4. Understanding is interpreting the significance of relationships between two or more sets of knowledge and deriving prime causes and effects from these relationships. While Gallup may unearth the knowledge that 33% of voters are likely to vote for a particular candidate, understanding why they lean that way is something that information systems can only hint at. Understanding remains an endeavor only humans are adept at. No matter what you may hear from the “hypesters” (not to be confused with the hipsters!), understanding cannot yet be performed by computers. As much as it might appear to be the case, the Siri and Google Talk systems lack an understanding of your commands.
    “Understanding” is how consultants and advisors make a living. Companies such as Gartner or writers of popular science and “How To” books are in the business of providing distilled understanding. Of course, if you happen to watch regular Sunday morning political discussion programs showcasing pundits and politicians in topical debates, you know that the “understanding” you get from these guests often can be biased and even wrong. Enter wisdom . . .
  5. Wisdom is the ability to choose between correct and faulty understanding. This is the famous feedback loop I referred to at the beginning of this article. The fact is, understanding can be the result of wrongly extracted knowledge, which may come from bad source data (outright misinformation), or content improperly formed with inappropriate taxonomies. For example, the taxonomy that classifies human beings according to race or some other categorization of “otherness” often leads to xenophobia, homophobia or racism.
    Wisdom represents the highest level of value in the information progression. Wisdom is not always objective or static. It can be subjective, and it is certainly dependent on the cultural environment or transitory circumstances. This is why it is unlikely that we will ever be able to codify “hard-coded” wisdom within computers, and why the belief that these future computers may act as judges in the affairs of men is dubious at best.
    Wisdom can be applied toward either material or spiritual benefits. Yes, Wisdom can be applied for profit and business advantage. However, the fact that something is applied with wisdom does not dictate whether it is right or wrong. Beyond wisdom we enter the realm of morality and philosophy. Even this last point is open to debate. Some have an “understanding” that moral relativism is wrong, but some of us don’t think so.
    But I digress. . .
    Whether future software will be capable of Understanding (much less Wisdom) is open to debate. There is much we still do not know about how we humans think and about the nature of our cognitive processes. Humans mastered flying only after they stopped trying to replicate the way birds take to the air. Aviation accomplishes flight even better than birds do by leveraging the underlying laws of nature, something birds also do, only differently. This is why I believe that multi-million dollar projects such as the European “Human Brain” project[1] and the American-sponsored Brain Activity Map Project (BRAIN), which try to map the neurons in the human brain not unlike the way the human genome was successfully sequenced, have the markings of fools’ errands. Recycling an old saying: “It’s the software, stupid.” If the much-predicted Singularity is to happen, it will probably require computer systems that “think” very differently from the way we humans do. And that “thinking” will be software based. Even then, I cannot conceive of truly automated wisdom (aka “Strong AI”) without first solving the question of what “consciousness” is. We are a couple of Einsteins away from figuring that one out.
    But I digress again . . .Whether strong AI is feasible or whether the Singularity will occur are problems best left for the next generation. As you stand securely atop the Content stage, remember that nothing is stopping you from moving up the next step on the road to wisdom: the Knowledge stage. Time to dive more into that Data Science stuff!
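Returning to the Content step (item 2), here is the minimal sketch promised earlier of how raw data becomes content once a taxonomy is attached. The Linnaean-style ranks and the two sample records are illustrative assumptions only:

```python
# A minimal sketch of the Data-to-Content step: data plus taxonomy yields content.
# The ranks and the sample records are illustrative, not a real data model.
from dataclasses import dataclass

@dataclass
class Record:                      # raw data: just fields, no structure imposed yet
    name: str
    kingdom: str
    phylum: str
    class_: str
    species: str

raw_data = [
    Record("gray wolf", "Animalia", "Chordata", "Mammalia", "Canis lupus"),
    Record("green frog", "Animalia", "Chordata", "Amphibia", "Lithobates clamitans"),
]

def classify(records, rank):
    """Grouping the same raw data by a taxonomic rank turns it into content."""
    content = {}
    for r in records:
        content.setdefault(getattr(r, rank), []).append(r.name)
    return content

print(classify(raw_data, "class_"))
# {'Mammalia': ['gray wolf'], 'Amphibia': ['green frog']}
```

Swap the rank passed to classify() and the very same data yields a different slice of content, which is why the choice of taxonomy matters so much further up the ladder.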

[1] See this link for a status on this project: http://www.bbc.com/news/science-environment-28193790

Friday, October 22, 2010

On Software as a Service

True, the traditional view of software commercialization may go the way of the slide rule and the typewriter, but there will always be a need for the services that software provides. However, the ability to access software services depends heavily upon the enabling of shared infrastructure from companies providing hosting, data storage, networking and telecommunication services. This infrastructure should continue to move towards standardization to facilitate the kind of “plug-and-play” flexibility the market demands. The ongoing standardization of emerging “middleware” technologies supporting the distribution of and access to services via service interfaces will have an impact comparable to that of the World Wide Web.

Software as a Service (SaaS) is exploding nowadays. Google’s application suite is an instance of SaaS providing generic horizontal services. Function-specific products such as GoToMeeting and WebEx for online meetings, along with Sales Force Automation tools (a more focused horizontal SaaS category), have been gaining significant market share over traditional competitors. This explosion also includes vertical industry applications. Thousands of hotels use TravelClick for reservations; the health care industry has hundreds of SaaS applications for patient management, ambulance services, etc. Plus remember, ultimately, Facebook and Twitter are nothing more than social media SaaS environments.

Despite all of this, SaaS is not a panacea—at least not yet. The model has to mature and as a result, the range of options, costs and enrollment mechanisms is still too varied and complex. Most significantly, SaaS systems need to find the right balance between functionality and flexibility; plus the model presents a list of new security considerations. Are you comfortable having your company’s most sensitive data out there, somewhere in a cloud?

Take heart though: standardization breeds commoditization, and one result of standardization is that in the future there will be a consolidation of service models and expected features. This consolidation is also being facilitated by the emergence of the “Cloud Computing” model, which essentially makes the infrastructure services supporting SaaS invisible to the user. Large vendors are already introducing sophisticated virtualization, security, and management tools that will enable SaaS providers to offer an expanded range of configuration and partitioning models to their clients.

But SaaS does not necessarily need to be wholly based on centralized service delivery. The paradigm also applies to distributed services such as those provided to smart-phones. Already the paradigm for the booming smart-phone market is that of downloadable “Apps” with modules providing functions. Some Apps run entirely standalone, but others provide a front-end that can access powerful backend systems. Google is making available a suite of shopping Apps that instantaneously leverage the powerful Google server environment to display reviews, alternative prices and so on. The popular Shazam is a complex application that tracks and recognizes tunes being played, and there are a myriad of widgets for all kinds of things. The user, particularly the younger user, no longer views these Apps as software. The kid downloading a ring tone is not buying software or data but an experience. The fact is that many of the App providers are now moving away from straight purchase models and toward service subscription or ad revenue models.

So far I have discussed the SaaS paradigm from the perspective of a consumer of services. The question is how your company will fit into the upcoming Infosphere economy and how this will impact your very own IT strategy. What kind of SaaS is your company planning to offer, if any? When you envision the IT system of the future, you need to ascertain how you are going to play in this brave new world: as a provider of software services, a user, or both. This includes defining the manner in which you will make your IT services accessible to users. When doing this, you will be glad you followed a comprehensive SOA strategy as the baseline for the IT transformation.

In a way, SOA is a necessary (though not sufficient) element for the enablement of SaaS. SOA systems intrinsically create services that can be selectively commercialized under SaaS. The SOA services become SaaS services. In other words, the concept of Software as a Service will evolve into the more prosaic “Service as a Service.” This statement seems obvious, but it has deeper implications. A complex SOA system may well consist of an interplay of components. For example, Provider #1 of service S1 may access a second service, S2, from Provider #2, who may depend on a service, S3, from Provider #3, and so on. The user only sees the integrated service provided by Provider #1 and can be oblivious to the value chain behind the original service request. In essence, SOA enables the replication of the way traditional value chains operate, except now we are using digital means. Just like real markets, SOA systems can become incredibly complex. Their support has to be structured in such a way as to allow quick resolution of issues presented by complex, intertwined value chains. There have to be clear accountability lines.
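Here is a minimal sketch of such a chain; the providers, the services and the tax rate are hypothetical, and the only point being made is that the consumer calls S1 and never sees S2 or S3:

```python
# A minimal sketch of a SaaS value chain (S1 -> S2 -> S3) behind a single service call.
# Provider names, services and rates are hypothetical.

def provider3_tax_service(amount: float) -> float:
    """S3: an external tax-calculation service."""
    return round(amount * 0.08, 2)

def provider2_pricing_service(sku: str) -> dict:
    """S2: prices an item and, behind the scenes, calls S3 for tax."""
    base = {"GUITAR-11": 499.00}.get(sku, 0.0)
    return {"sku": sku, "base": base, "tax": provider3_tax_service(base)}

def provider1_quote_service(sku: str) -> dict:
    """S1: the only service the consumer sees; it hides the entire chain."""
    quote = provider2_pricing_service(sku)
    quote["total"] = round(quote["base"] + quote["tax"], 2)
    return quote

print(provider1_quote_service("GUITAR-11"))
# The caller is oblivious to S2 and S3, which is exactly why accountability
# and SLAs must be explicit at every link in the chain.
```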

Having said this, I am doubtful that mission-critical IT systems should ever rely entirely on external SaaS services. I firmly believe that technology (some technology, anyway) will always be a weapon to attain commercial advantage and to enhance one’s competitiveness. Don’t buy into the idea that all software will become so commoditized that it will be something you can always provision externally, a simple utility provided by SaaS. General-purpose business software such as ERP systems? Use them as a commodity. These systems do what they do. Being able to process payroll or accounts receivable internally is not going to give your company a competitive advantage (I am sure, though, there are exceptions to this!). But there will always be that little extra function, that cost-cutting algorithm or automated innovative process, that will not be available externally, either because it represents a core intellectual property asset of the company or because the cost or risk of placing it in an external environment is not acceptable.

The question then is “What services should you endeavor to create rather than purchase?” The answer depends on an analysis of what you are trying to get out of the service: Data? Content? Wisdom? More on this next…

Friday, October 8, 2010

The Emerging Business Models in Information Technology

Until recently, the traditional IT revenue model landscape was a rather trivial one. You had your vendors—the companies that developed software or hardware products for use by other companies—and then you had your clients, who consumed those products through straightforward purchasing or licensing along with yearly recurrent maintenance payments. On the side you had consulting companies that served as honest brokers helping to define high-level strategies. Add to this the providers of ancillary services, and you end up with most of the IT world of yesteryear.

This simple scenario is no more.

Emerging IT technologies and solutions are now being offered under a cornucopia of models, many of which are only now beginning to be understood. Beyond the “pay-if-you-can” models spawned by the availability of Freeware, Shareware and Open Source, the future will see the delivery of software under a variety of revenue models, including Software as a Service, Software as a Function, and ultimately the probable disappearance of software as a standalone product. Software-under-the-Hood represents a mindset shift wherein consumers are no longer buying software but rather the things that software can do. Companies providing these services will use a variety of revenue models: free plus maintenance, one-off purchasing, subscription, advertising, charge per utilization, and on-demand, among others.

Google, for instance, makes the bulk of its revenue from advertising, not from selling search software. Likewise, eBay’s revenue model is based on auction facilitation and commissions. Facebook’s revenue model has flipped the world: what was originally the customer (i.e. Facebook friends) has become the actual product sold to advertisers (yes, you, my friend, are the product!). The generalization of SOA and the emergence of more sophisticated technologies will facilitate the drive to offer services rather than software. After all, subscribing to WebEx may give you the chance to download a client-side software module, but what you are ultimately paying for is the ability to schedule meetings on demand.

Mix this recipe: pour a liter of globalized Internet seasoned with Cloud Computing; add a cup of SOA-facilitated Software as a Service and a couple of spoonfuls of Business Process Outsourcing; heat with mobility technologies and spice with the growing success of social networking as the new killer-app. What you’ll have is a dish representing the transformative emergence of new players providing as-yet-unheard-of business services. Already, it is difficult to categorize Facebook or Google under traditional definitions. In the future, the roles played by Microsoft or IBM will still exist, but even traditional software companies realize the need to reinvent their products and business models if they are to compete under a continually changing landscape. The future will also see the disappearance of some of the typical roles in the value chain (witness the demise of brick-and-mortar electronics retailers such as Circuit City or CompUSA), and more importantly, the emergence of newer models, redefined to better fit the changes in the information economy. This type of change can only be ignored at the risk of the company’s survival. If you doubt this, recall Wang Laboratories and its Word Processing flagship product as it faced the PC revolution, Polaroid as it confronted the digital photography revolution, or Blockbuster in the process of being busted by Netflix (pun intended!).

Just as earlier software models were based on the “a computer on every desk” idea, or the importance of search, or some other insightful tenet, the next Bill Gates, Larry Page or Mark Zuckerberg will most likely be a child of what has been referred to as “The Infosphere[1]”. The Infosphere is the paradigm that all informational elements will be accessible from the electromagnetic digital media around us. You can think of the Infosphere as 3G or WiFi coverage on steroids: ubiquitous, always available, and transparent. It will be the natural result of the pervasive advent of cloud computing and the continued decoupling from specific access devices[2].

Recall some of my earlier observations about how technology usually “evolves” from hype to invisibility as it becomes pervasive. Contrary to Wired Magazine’s recent claim that the Web is dead (at least from the perspective of the Web Browser as a universal client), I believe that the Web is very much alive. It’s just that it is evolving into invisibility.

While the Web Browser is now embedded in the hidden fabric of technology, the delivery of new applications and content for new mobile devices on a demand basis, anytime, anywhere, is also becoming an assumed capability. There is an umbilical cord being formed between most of the world and the emerging Infosphere.

Already the rapid adoption of technologies such as Apple’s iPhone and other Smartphones can be seen as an early example of this Infosphere. Mobile devices are today’s equivalents of the PCs of yore: computers that you can carry with you at all times, prostheses for the brain. Using these devices to interact with the Infosphere from anywhere, at any time, is no longer a technology question but a commercial one. If only phone carriers did the smart thing and lowered those outrageous data roaming charges!

But back from that digression to the topic at hand. The key now is for someone to figure out the right revenue models to apply in the future Infosphere. Data roaming charges have got to go; ads on Smartphones might be fine, but I doubt the revenue they generate will pay for the totality of mobile services. Subscription or membership fees for social communities may emerge; who knows… In the end, much will depend on what become the killer apps and services in the next few years. Figuring that out is the key.

How to do this? Remember the suggestion I made about how best to predict the future of technology? The secret is to find the synergy; that is, to visualize the usually unforeseen ways parallel advances will combine to form a new game-changing event.

Find the synergy, especially as it relates to the impact the future may have either on your business or your IT strategy, and you will be on the road to defining your follow-up transformation strategy. If you agree that we are in the midst of an accelerated transition to an Infosphere paradigm, then it makes sense to try to imagine what the likely future business opportunities of such transition will be.

More on this next time . . .



[1] Even though the term “Infosphere” has been around for a while (according to Wikipedia, since the sixties), it should be noted that IBM has recently created an Infosphere brand for one of their Information Management software products.

[2] A more esoteric term “Noosphere” has been used to describe a future global sphere of shared human thought—a sort of collective consciousness of human beings. I suppose some nice essays could be written on how the evolution and use of the Infosphere could be the technological enabler for a future Noosphere!

Friday, September 24, 2010

IT Transformation Lifecycle. The Wrap Up.


Perhaps you’ve noticed, but up till now my series of blog posts has followed a generic IT Transformation life cycle that roughly has these steps:
  1. Identifying the business needs
  2. Defining the drivers for Transformation and developing the business case
  3. Evaluating the future
  4. Understanding the scope and requirements
  5. Defining the technology strategy
  6. Making the Business Case
  7. Applying SOA as a Solution Architecture
  8. Defining the Services Taxonomy and the SOA Framework
  9. Applying the right SOA approaches & techniques
  10. Engineering the solution
  11. Establishing the right Governance and team
  12. Executing on the project via appropriate Project Management techniques
  13. Migrating to the new system
The diagram below summarizes this life cycle in terms of the purpose for each step:
We have now reached the end of the cycle. Upon successful migration you and your team are entitled to celebrate and reward the key performers. There is a lot to be happy and grateful for. However, you’d be wise to keep your cell phone active even as you celebrate. Initially at least, there is the likelihood that you will be receiving a large number of calls regarding deployment problems. The truth is that it will take some time before the system becomes truly stable. Indeed, it is a well-known fact that all systems follow the so-called “bathtub curve” when it comes to failure rates:


A new system starts with a high failure rate that will (hopefully) diminish in time until it eventually becomes stable. True, failures will never go away during the system’s productive years (the “Rubber Ducky years”, I call them), but the failure rate should remain reasonably low and under control. However, after the cumulative changes and stresses of continued improvements are applied throughout the years, you will notice that, with the weight of age, natural entropy eventually takes its toll, making the system more and more unstable. Problems will arise and changes will become more difficult, meaning that, in time, the system will become sufficiently stiff and inflexible to be ripe for yet another IT transformation!

That’s right folks. From the moment you cut that ribbon debuting your new system, the system becomes “legacy”. So, what was really accomplished after all the effort and the millions in investment to create a new solution? Well, if you did things more in the right way than in the wrong way during this transformation, the “length” of the bathtub will hopefully extend for many more years than it would have otherwise. That’s right: you’ll move from a bathtub to a swimming pool. Also, the functionality of the new system will have been much improved. All in all, a good IT Transformation effort should be something to delight in. Like a good bath!
A new cycle does not imply repetition or more of the same. IT Transformation is about progress. The true shape of progress resembles the picture below: an upward spiral towards new solutions.



Mirroring this “spiral of progress”, my blog is also resetting somewhat. It has now reached a plateau. My future blogs will continue to cover the general theme of IT Transformation. Getting set to walk up another flight of stairs, I will cover general aspects related to emerging business needs. After all, we are now facing the Social Media explosion and the heightened pervasiveness of mobile computing. Who can say what the true impact of the advent of the iPad and soon-to-follow iPad-like devices will be? I intend to continue discussing current drivers for transformation and the technologies that will shape the future.
Rather than making a weekly appearance, I will endeavor to publish an article every other week, time and cycle of life permitting. 

Till then . . .


Friday, September 10, 2010

Evaluating the SOA System Migration Alternatives


Even with traditional systems designs you have an array of options as to the best way to approach your system’s migration. The ultimate migration strategy will be driven by the specific business requirements and characteristics of your system. For example, you can do a so-called Big-Bang approach or you can migrate on a functionality basis. SOA gives you an additional option of migrating on a layer-by-layer basis (Presentation, Process and Data layers)[1].
A system wide Big-Bang migration should be avoided at all costs, but if you are able to segment the target audience so that you can deploy the system with a series of coverage based “mini-big-bangs”, you’ll have a great option. For example, if you can “big bang” a particular region or country to the new system (the smaller, the better, as you are using them as guinea pigs), and then gradually add new regions as you grow in confidence, this can be a winning strategy.
Unfortunately, this type of coverage-based migration is not typically possible in transformed systems. After all, IT Transformation programs tend to be enterprise-wide and holistic in nature. In these cases, you need to check whether it is possible to stage the migration on a function-by-function basis.
In functional-based migrations you gradually introduce specific parts of the presentation, process, and data layers for each functional subset identified. An example of a functional based migration would be one in which you first introduce the new CRM system, then add additional functional blocks, gradually sun-setting the legacy environment.
In order to identify if functional-based migrations are feasible, I suggest preparing a dependency map listing the specific subsystems that support each autonomous function. This dependency map will identify the services and subsystems in each SOA layer that can be deployed as standalone deliverables.
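A dependency map does not need to be anything fancy. A minimal sketch, with hypothetical functions and subsystems, might look like this:

```python
# A minimal sketch of a migration dependency map. Functions and subsystems
# are hypothetical placeholders.

dependency_map = {
    # business function -> subsystems it needs in each SOA layer
    "customer_profile": {"presentation": ["profile_ui"],
                         "process": ["crm_services"],
                         "data": ["customer_db"]},
    "reservations": {"presentation": ["booking_ui"],
                     "process": ["availability_svc", "pricing_svc"],
                     "data": ["inventory_db", "customer_db"]},
}

def standalone_functions(dep_map):
    """Functions whose subsystems are used by no other function can be
    deployed as standalone deliverables."""
    usage = {}
    for func, layers in dep_map.items():
        for subsystems in layers.values():
            for s in subsystems:
                usage.setdefault(s, set()).add(func)
    return [func for func, layers in dep_map.items()
            if all(len(usage[s]) == 1 for subs in layers.values() for s in subs)]

print(standalone_functions(dependency_map))
# Both functions above share 'customer_db', so neither qualifies as standalone;
# exposing that kind of coupling is precisely what the map is for.
```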
As you may imagine, functional-based migrations will demand integration of the legacy to the new function—at least for the duration of the migration. The good news is that you can leverage SOA’s ability to integrate legacy systems to new systems via the service encapsulation (i.e. service wrappers) of old functions. The bad news is that this type of integration can quickly become so onerous or complex that functional based migration becomes unmanageable.
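To show what a service wrapper can look like in its simplest form, here is a sketch; the legacy inventory module and its fixed-width record format are invented purely for illustration:

```python
# A minimal sketch of legacy encapsulation via a service wrapper.
# The legacy module and its fixed-width record format are hypothetical.

class LegacyInventorySystem:
    """Stands in for an old module that only speaks fixed-width records."""
    def lookup(self, raw_request: str) -> str:
        sku = raw_request[:10].strip()
        return f"{sku:<10}{42:>5}"   # SKU padded to 10 chars, quantity right-aligned in 5

class InventoryServiceWrapper:
    """Exposes the legacy function as a modern, structured service."""
    def __init__(self, legacy: LegacyInventorySystem):
        self.legacy = legacy
    def get_stock(self, sku: str) -> dict:
        raw = self.legacy.lookup(f"{sku:<10}")
        return {"sku": raw[:10].strip(), "quantity": int(raw[10:15])}

wrapper = InventoryServiceWrapper(LegacyInventorySystem())
print(wrapper.get_stock("ABC-123"))   # {'sku': 'ABC-123', 'quantity': 42}
```

Wrappers like this are typically throw-away work once the legacy module retires, which is part of the cost being weighed here.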
Another approach is taking advantage of the fact that well-designed SOA systems facilitate layer-by-layer migrations. This additional level of granularity is another tool in your arsenal whereby segments of the new functionality are introduced gradually via stepwise introduction of the new SOA layers. The question then is which layers to introduce first and how. In this case you will be faced with an array of choices. What you choose to do, again, will depend upon the specific characteristics of your project. Following are sample pros and cons of each approach:
Migrating the Interface first.
This implies mounting the new GUI and interfaces against the legacy environment.
Pros
o Technically this is a low risk approach, provided the new interfaces easily replicate the functionality of the legacy interfaces.
o If the new GUI is friendlier and provides some GUI-specific functionality enhancements, you will be able to benefit from this early on.
o It can be helpful in expediting the training of users on the new system interfaces.
Cons
o The business team could be disappointed and confused by this approach, as they will naturally expect improved functionality that might not yet be available due to the continued use of the old backend systems during the migration.
o You will need to train the staff prior to completing migration. Depending on needed future refactoring for processing and data layers, this training could become outdated.
o You will need to develop an encapsulated SOA layer so that the new GUI can access the legacy. This is essentially throw-away work.

Combined GUI and Processing Layers First
While theoretically you could first introduce just the processing layer, setting aside the legacy and data layers till a later time, this approach is usually not feasible. You would need to decouple your legacy presentation logic from business logic, and chances are that this would not be trivial. Accessing legacy data from a new processing layer is perhaps easier, but the new functionality will then be severely constrained by the legacy data schemas. A more practical approach is to move the processing and the GUI layers as a unit. You will then have to take into account these considerations:
Pros
o You can introduce enhanced functionality that does not depend on the data layer. For example, new validations or user workflows.
o Since data migration is usually the trickiest thing attempted in a migration, by first migrating the combined GUI and processing layers, you will be able to replicate existing functionality with minimum risk.
o You will more easily be able to fall back the system, if needed.

Cons
o The business areas will expect more functionality than you can provide. e.g. “Where is the internationalization?” This might not be feasible as long as you continue using old DBMS schemas!
Processing and Data Layers First
This is equivalent to a city building infrastructure (water mains, electrical wiring, sewage) but doing so in a way that the population remains unaware of the changes below ground.
Pros
o You can do the heavy lifting without troubling the users.
o Easy to fall back the migration, as you are making the change in a fashion transparent to the user.
Cons
o You will need to emulate the legacy interface processes and, depending on the legacy system, this might not be doable. Note that I am not recommending modifying the legacy presentation layer to talk to the new system! This would entail too much throw away work.
o Lots of work that nobody will notice at first. Management will be asking, “Where is the beef?”

Data Layer First
Frankly, the idea that your legacy processes will be modifiable to use new data schemas is not a practical one. At best, you can expect to do a pure data replication and transformation as an initial stage in what will become a Processing plus Data migration. Migrating only data is not a step I would recommend.
In any case, any data migration effort should endeavor to create a Y-split that will ensure that the legacy and the new data can coexist at least for the duration of the migration. When migrating processes plus data, you should ideally create process switches that will allow Y processing (parallel processing) of transactions between the legacy and the new system. This serves as a trial run for the new system, with the plan to eventually refresh the new system and start from scratch.
Under any scenario, make sure to use appropriate monitoring and control switches as well as planning for fallback options. Place the appropriate switches to turn the new elements on and off as needed, and include the necessary logging to ensure you know what’s going on.
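A minimal sketch of such switches and Y processing, with hypothetical flag names and writer functions, could look like this:

```python
# A minimal sketch of Y processing with operational switches and fallback.
# Flag names and the writer/reader functions are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("migration")

FLAGS = {"dual_write": True, "read_from_new": False}   # flip at run time as confidence grows

def write_legacy(txn):  log.info("legacy write: %s", txn)
def write_new(txn):     log.info("new-system write: %s", txn)
def read_legacy(key):   return {"source": "legacy", "key": key}
def read_new(key):      return {"source": "new", "key": key}

def record_transaction(txn):
    """Y processing: every transaction goes to legacy; optionally also to the new system."""
    write_legacy(txn)
    if FLAGS["dual_write"]:
        write_new(txn)

def fetch(key):
    """Reads can be flipped to the new system, and back, without a redeploy."""
    return read_new(key) if FLAGS["read_from_new"] else read_legacy(key)

record_transaction({"id": 1, "amount": 100})
print(fetch(1))   # still served by legacy until the read switch is flipped
```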
The best migrations are those that get you to that brave new world in one piece, with satisfied business users, and with your sanity intact!


[1] See my previous blog on the SOA Distributed Processing Pattern at: http://www.soa-transform.com/2009/08/soa-distributed-processing-pattern_20.html)
