Network visualization, automation: At odds?

Visualization is a powerful tool. But in the context of network management, the prevalence of graphs may indicate that your network management system (NMS) is not ready to meet the requirements of autonomic network performance. Why so? Loki Jorgenson explains in this tip.

We all like pretty graphs -- they give us visibility into worlds that are otherwise hidden. But when it comes to network management and the future of network automation, graphs are a sign of immaturity and stubbornness.

Visualization is a powerful tool. Of that there is little doubt, given the impact that visualization has had on innumerable fields of human endeavor. Two of the most prestigious and impressive industry conferences around, ACM SIGGRAPH and IEEE Visualization, are overflowing with creative technologies and glimpses into a highly graphic future. And no one would deny that the video gaming industry is huge and economically significant.

But in the context of network management, the prevalence of graphs says only one thing -- the network management system (NMS) is stuck in the 20th century and not ready to meet the requirements of self-healing, autonomic, auto-adaptive, application-oriented network performance. Why so?

Graphs -- and all those awe-inspiring visualization techniques -- are intended for human consumption. When a network management system throws up a graph or a chart, it is for humans to look at, analyze, interpret and act upon. And in today's network industry, that makes a lot of sense -- humans are indispensable. In fact, the ratio of IT staff spending to IT capital expenditures is said to be 18:1. That's a lot of people. So, naturally, there are lots of graphs around, enabling people to see into the state of networks, helping them build their intuition about the network's behaviors, and supporting them in finding solutions to problems that the graphs illustrate.

But humans are part of the bottleneck

With rapidly increasing network complexity, the demands of converged applications, the blurring of the boundaries between the various functionalities represented by the OSI model, and the critical dependence of business processes on networks, it is very difficult for humans to keep up. Worse, the pool of IT skills is dwindling relative to demand, especially in North America.

To put things in perspective, consider that networks are only now maturing to the point where they are susceptible to automation. Until recently, it has been typical for humans to populate all levels of the network ecology. If you think about it, networks are really cybernetic entities: end users incidentally act as application performance monitors, their phone calls to the support center are the alarms, and the support staff act as diagnostic systems, requesting specific information from the end user and correlating it to determine what action should be taken to resolve the problem. When the network itself is finally determined to be at fault, the support people escalate trouble tickets to the network engineers who, as management mechanisms, remediate or provision as required.

As many businesses can attest, this is a very costly way to run a network. People are an expensive resource -- and rightly so -- and when they are put to work in such an inefficient manner, their productivity suffers. And so does business.

A review of the Gartner IT Maturity model reveals that, in that framework, networks are not particularly far along. And for good reason, as the history of IP networks shows. However, recent advances are bringing networks ever closer to the breakout potential of automation.

Automation, though, requires the specifics of network behavior and problems to be discovered and identified -- and the parameters for action to be determined. That points to sophisticated network analysis and intelligence capable of resolving those details so that acts of remediation or provisioning can be initiated automatically. To date, such details have been hard, if not impossible, to generate. And the interfaces needed to invoke machine actions (such as the IETF NETCONF standard) are not yet widely available.
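To make the idea of a machine-invocable interface concrete, here is a minimal sketch of what such an automated action might look like over NETCONF. It assumes the open-source Python ncclient library and the standard ietf-interfaces YANG data model -- neither is mentioned in this column -- and the device address, credentials and interface name are hypothetical.

```python
# Sketch: machine-invoked remediation over NETCONF, using the open-source
# Python ncclient library (an assumption; the column names only the NETCONF
# standard itself). Host, credentials and interface name are hypothetical.
from ncclient import manager

# XML payload against the standard ietf-interfaces YANG model:
# administratively disable a misbehaving interface.
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/1</name>
      <enabled>false</enabled>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(
    host="192.0.2.1",       # hypothetical device (TEST-NET address)
    port=830,               # standard NETCONF-over-SSH port
    username="admin",
    password="secret",
    hostkey_verify=False,
) as m:
    # Assumes the device supports the candidate datastore; a device with
    # only writable-running would use target="running" and skip commit().
    m.edit_config(target="candidate", config=CONFIG)
    m.commit()
```

The point is not this particular library but the shape of the transaction: structured data in, structured acknowledgment out, with no human eyeball in the loop.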

Graphs, therefore, are often the sign that little or no machine intelligence is present.

That is, the data is being mechanically reduced and presented to a human in the hope that a smart engineer may be able to deduce what is going on and take action. A substantial network intelligence should be able to tell the engineer what, where, when and how to fix -- without needing to make up a lot of pretty pictures.

As a wise man once said, "An educated person can give directions without resorting to the use of hands."

And so it must be true of autonomic systems that they simply will not respond to pictures -- they will need details, probably expressed in XML, that lead directly to actions. Certainly, there will always be a case for graphs and other visualization tools alongside whatever "intelligent" system is in operation, but those instances will remain strictly for human consumption. They should be restricted to cases in which human supervisors need to verify the actions taken by intelligent network systems, in which unexpected situations arise outside the competence of the expert system, or in which one human needs to communicate or illustrate the nature of a situation to another.
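As a rough illustration of the kind of machine-consumable detail meant here, consider a hypothetical XML diagnosis record. The schema below is invented for illustration -- it is not any published standard -- and it is parsed with Python's standard library.

```python
import xml.etree.ElementTree as ET

# Hypothetical machine-readable diagnosis record. The element names and
# values are illustrative only, not a published schema.
DIAGNOSIS = """
<diagnosis>
  <fault type="duplex-mismatch" severity="major"/>
  <location device="edge-sw-04" interface="GigabitEthernet0/1"/>
  <recommended-action>set-duplex-full</recommended-action>
</diagnosis>
"""

root = ET.fromstring(DIAGNOSIS)
fault = root.find("fault")
loc = root.find("location")
action = root.findtext("recommended-action")

# An autonomic controller would dispatch on these fields directly,
# rather than render a chart for a human to interpret.
print(f"{fault.get('type')} on {loc.get('device')}/"
      f"{loc.get('interface')}: {action}")
```

A record like this carries the "what, where and how to fix" as data a controller can act on, which is precisely what a graph cannot do.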

Is your network ready for the autonomic future? Only if your NMS can retain its value beyond the graphs it makes.

Notice: No graphs or diagrams were used in the publication of this column.

References:
ACM SIGGRAPH conference
IEEE Visualization conference
"Ghost in the Machine" (source of the 18:1 IT staffing figure)
IETF NETCONF

About the author: Chief Scientist for Apparent Networks, Loki Jorgenson, PhD, has been active in computation, physics and mathematics, scientific visualization, and simulation for over 18 years. Trained in computational physics at Queen's and McGill universities, he has published in areas as diverse as philosophy, graphics, educational technologies, statistical mechanics, logic and number theory. He also serves as Adjunct Professor of Mathematics at Simon Fraser University, where he co-founded the Centre for Experimental and Constructive Mathematics (CECM). He has headed research in numerous academic projects, from high-performance computing to digital publishing, working closely with private sector partners and government. At Apparent Networks Inc., Jorgenson leads network research in high performance, wireless, VoIP and other application performance, typically through practical collaboration with academic organizations and other thought leaders such as BCnet, Texas A&M, CANARIE, and Internet2.