Moving to a single voice and data network can save money, but it means putting all your eggs in one basket - so checking reliability of service is crucial
With voice and data combined on one network, and the prospect of lower IT and communications costs and greater business efficiencies, it is easy to see why converged internet protocol (IP) networks seem such an attractive proposition.
But moving to a converged network also has a downside. If you get hit with a network slowdown, unplanned downtime, or a major virus or worm attack, for instance, all the applications on the converged network could be affected. By putting all your eggs in one basket, you multiply the risks.
Will Cappelli, research vice-president at analyst firm Gartner, says, "There is a significant subjective risk in migrating to a converged network simply because we do not yet understand the impact of large levels of converged traffic on our corporate and public networks."
IP-based networks were designed for particular types of data traffic, and are now carrying other media such as voice and video, which behave in a different way to the data traffic created by traditional business applications.
So it is essential that IT directors are aware of quality of service issues and have a way to measure the quality of their converged networks.
Traditional voice networks have become very robust. Data networks, on the other hand, frequently face problems and require additional layers of technology to ensure they remain resilient.
In addition, organisations are increasingly running critical business applications over these data networks, relying on, for example, SAP, or Oracle or a commerce website. In essence, the whole company relies on the network.
The main problem with a converged voice/data network is latency - the time it takes for packets of data to arrive at their destination. Half a second may not matter to someone keying something onto a screen. But if you are using voice, that application soon becomes unusable.
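The latency point can be made concrete with a short sketch. The function below, an illustration rather than any vendor's tool, judges a list of measured one-way packet latencies against the commonly cited rule of thumb of roughly 150 milliseconds one-way delay for acceptable voice quality; the jitter calculation is a simplified stand-in for the RFC 3550 interarrival jitter formula, and the 30ms jitter threshold is an assumption for illustration.

```python
def assess_latency(latencies_ms):
    """Return (mean latency, jitter, verdict) for a list of one-way
    packet latencies in milliseconds."""
    mean = sum(latencies_ms) / len(latencies_ms)
    # Jitter here is the mean absolute difference between consecutive
    # packet latencies - a simplified form of interarrival jitter.
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    # ~150 ms one-way delay is the usual ceiling for good voice quality;
    # the 30 ms jitter bound is an illustrative assumption.
    verdict = "ok for voice" if mean <= 150 and jitter <= 30 else "voice degraded"
    return mean, jitter, verdict

# A half-second average delay - tolerable for someone keying data
# onto a screen - fails the voice threshold:
print(assess_latency([480, 500, 520]))  # → (500.0, 20.0, 'voice degraded')
print(assess_latency([20, 25, 22, 24])[2])  # → ok for voice
```

The same network can therefore be perfectly adequate for data applications while being unusable for voice, which is why latency has to be measured, not assumed.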
Mark Blowers, senior analyst at Butler Group, recommends that IT directors establish some type of governing team for the network, comprising a person or group of people responsible for measuring quality of service, and putting in place policies and network service level agreements.
These policies must apply to internal IT staff as well as external service providers, and include clear lines of communication and responsibility.
"It is important to plan this up front, at the assessment phase. A lot of companies do not know what is on the network," he says.
Currently, the norm is for several IT units to share the responsibility of measuring and monitoring service quality on converged networks.
These groups tend to be network-based IT staff, applications-based IT staff, and classical operations teams, who have a larger overview of the IT system.
"These groups will have to work together for the first time in history," says Cappelli. More often than not they have their own tools, languages and perspectives.
"Your network guys call voice over internet protocol (VoIP) an application, and your application guys say it is not an application. In some ways both are right, because it lives in the same infrastructure. But both are going to have to learn to cope with this. And network guys have to learn how to manage multiple applications over the network," says Cappelli.
One way to cross the divide is for IT staff to recognise that business applications and network applications are not separate, and that the IT department must manage the users' access to resources.
To facilitate this, organisations will need an end-to-end team that spans the whole IT system, says Cappelli.
In situations where the organisation has outsourced its network to a service provider, or where multiple suppliers are responsible for the communications, e-mail, and various business applications, service providers will have to work together.
However, there is no sign that this is happening yet, on any significant scale. Service providers like BT are moving into areas where they manage the converged network and offer to monitor certain applications, but few service providers will offer an IP-based service portfolio that ranges from voice to SAP, for example.
So for now it mainly falls to the IT director to ensure that his or her team can measure service quality on the converged network.
Fortunately, there are many tools available to do this. These include network-level performance tools from the likes of Cisco or Alcatel, which integrate with IP-based private branch exchanges (PBXs).
Networking equipment now tends to come with a technology/standard called quality of service (QoS), which helps to maintain service levels at the packet level.
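At packet level, QoS works by marking each packet so that routers along the path can prioritise it. As a hedged sketch of how an application participates in this, the standard socket option `IP_TOS` can be used to request the "Expedited Forwarding" marking (DSCP 46, the conventional class for voice traffic); whether routers honour the marking depends entirely on how the network's QoS policies are configured.

```python
import socket

DSCP_EF = 46  # Expedited Forwarding - the DSCP class normally used for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The DSCP value occupies the top six bits of the old TOS byte,
# so it is shifted left by two: 46 << 2 == 0xB8.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)

# Confirm the marking took effect on the socket:
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(hex(tos))  # → 0xb8
sock.close()
```

Every packet subsequently sent on that socket carries the voice-class marking, which is the hook QoS-capable networking equipment uses to maintain service levels.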
Tools that give you visibility over the network are also available to enable the administrator to see how a particular router or server is performing, how the network is functioning overall, and where the bottlenecks are.
Tools from the likes of Network Appliance, Netscaler, Packeteer and NetIQ, meanwhile, can help identify network bottlenecks, and even warn of problems in advance, so you can be more proactive in the provision of bandwidth.
However, the biggest change over the past five years has been a shift away from tools that examine the events that take place in the infrastructure, or deep within the application code.
The administrator or application manager has traditionally used these tools to try to infer what the user is experiencing, but a new breed of tools attempts to capture directly the user's experience of factors such as network availability and response times.
These tools are available from major network management suppliers like Mercury Interactive, Compuware and IBM, but also from many smaller, innovative firms. There are about 100 available in all, either embedded into larger management portfolios, or as expensive standalone products.
"These have emerged as being really vital to manage overall quality," says Cappelli. But he adds, "They have their own drawbacks, because they focus on the high-level end and symptoms, and it can be difficult to read back to the root causes."
To plug this gap, another layer of diagnostic tools has emerged, which attempts to work backwards from the symptom to the cause of the performance degradation.
In order to allocate and maintain bandwidth, IT directors or network managers can take steps to reconfigure or add routers, or purchase more bandwidth capacity from their service providers.
Many IT departments go down one of these routes, arguing that adding more hardware is cheaper than investing in technology to make network allocation more intelligent. However, they then face the twin problems of manageability and mounting costs.
In addition, network service providers allow organisations to increase their bandwidth capacity, but make it hard to reduce it again if it is unused.
For now then, the network and application-centred tools seem the best way to maintain service quality on converged networks.
There are also innovative approaches to network monitoring, such as the technologies developed by emerging companies such as Bristol Technology and OpTier. These allow users to track each data transaction as it makes its way through an infrastructure.
A modern application will interact with many different software components, and it is very difficult to see what is really slowing down a particular transaction. The new breed of technology will track each IT transaction, monitor its quality and dynamically allocate more resources to the important business transactions, in real-time, according to set policies.
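The transaction-tracking idea can be sketched in a few lines. This is an illustration of the principle, not how Bristol Technology's or OpTier's products are implemented: each business transaction is tagged with an identifier and timed as it passes through each software component, so the component slowing it down becomes visible. The component names and simulated delays are hypothetical.

```python
import time
import uuid

def track(transaction):
    """Run a transaction through a sequence of (component, work) steps,
    timing each hop, and report the slowest component."""
    txn_id = uuid.uuid4().hex[:8]  # tag that follows the transaction
    timings = {}
    for component, work in transaction:
        start = time.perf_counter()
        work()
        timings[component] = time.perf_counter() - start
    slowest = max(timings, key=timings.get)
    return txn_id, timings, slowest

# Hypothetical three-tier transaction with a simulated bottleneck:
txn = [
    ("web_tier", lambda: time.sleep(0.01)),
    ("app_tier", lambda: time.sleep(0.05)),  # the slow hop
    ("database", lambda: time.sleep(0.02)),
]
txn_id, timings, slowest = track(txn)
print(slowest)  # → app_tier
```

A production tool would go further, comparing the per-hop timings against policy and dynamically allocating resources to the important business transactions, but the per-transaction visibility above is the foundation.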
Gartner sees such technologies as the way forward for measuring service levels on a converged network.
Network quality of service - don't believe the hype
Users should not believe the supplier hype about the network optimisation technology quality of service (QoS), says Forrester Research senior analyst Robert Whiteley.
QoS is designed to differentiate traffic by class of service, which the user can define - for SAP, VoIP or e-mail traffic, for example.
Its advocates promise QoS is standard, easy to set up, and part of a static configuration, says Whiteley.
However, according to Whiteley, the truth is that QoS is not necessarily standard across various suppliers' equipment, is hard to get right and requires consistent tuning. "For multi-service networks, QoS may be a necessary evil," says Whiteley.
"To combat the problem, limit it to only where it is needed, such as in the wide-area network, and use optimisation appliances like those from Juniper Networks, Orbital Data and Riverbed Technology to avoid unnecessary QoS requirements."
Too much granularity in class of service can make the network slow down or become hard to manage, as potentially thousands of network devices have to be uploaded with QoS profiles, says Whiteley.
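Whiteley's advice to limit granularity can be illustrated with a minimal classifier. Rather than thousands of per-application QoS profiles, traffic is mapped onto just three coarse classes; the port numbers used here (5060 for SIP signalling, 3200 for SAP, 1521 for Oracle) are conventional defaults, and the exact class/DSCP scheme is an assumption for illustration.

```python
# Three coarse classes of service instead of per-application profiles.
CLASSES = {
    "voice":       {"dscp": 46, "ports": {5060, 16384}},  # VoIP signalling/media
    "business":    {"dscp": 26, "ports": {3200, 1521}},   # e.g. SAP, Oracle
    "best_effort": {"dscp": 0,  "ports": set()},          # everything else
}

def classify(port):
    """Return (class name, DSCP marking) for a destination port."""
    for name, spec in CLASSES.items():
        if port in spec["ports"]:
            return name, spec["dscp"]
    return "best_effort", 0

print(classify(5060))  # → ('voice', 46)
print(classify(8080))  # → ('best_effort', 0)
```

Keeping the class table this small is the point: a scheme with dozens of classes would have to be replicated consistently across every QoS-capable device on the network.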
Case study: VoIP rules the roost on lawyers' network
One firm that has opted for a managed service to monitor its network traffic is Dickinson Wright, a US law firm.
It encountered problems with its VoIP phone system upgrade in 2004, experiencing frequent outages of long-distance and voicemail services, which cut vital lines of communication between lawyers and their clients.
Resolving the outages became a time-consuming, manual process for the IT department.
The law firm implemented Compuware's Vantage tool to manage application performance on the network from an end-user perspective. The tool also allows the company to prioritise its voice and data traffic.
Michael Kolb, CIO at Dickinson Wright, says the IT department has set VoIP calls as the highest priority, with internet traffic the lowest. Documents are also a priority, because it can be critical for the lawyers to send them in a timely manner.
"The majority of the data replication is done out of hours. Replication is the biggest network load. We replicate 200 databases on a daily basis, all out of hours," says Kolb.
In addition, the firm uses Cisco IP-based tools to passively monitor the network to discover which users are accessing it too heavily. This links into Microsoft Active Directory to locate the bandwidth hogs by their IP address.
"We can be on the phone to the user in two minutes, to tell them to kill their stream," says Kolb.
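The bandwidth-hog step the firm describes amounts to aggregating traffic volume per source address and surfacing the heaviest users. A hedged sketch of that aggregation, with hypothetical flow records and IP addresses, might look like this; mapping the resulting addresses to user names via the directory is the separate step Kolb describes.

```python
from collections import Counter

def top_talkers(flows, n=2):
    """Aggregate bytes per source IP from (src_ip, bytes) flow records
    and return the n heaviest users."""
    usage = Counter()
    for src_ip, nbytes in flows:
        usage[src_ip] += nbytes
    return usage.most_common(n)

# Hypothetical flow records gathered by passive monitoring:
flows = [
    ("10.0.0.5", 4_000_000), ("10.0.0.9", 120_000),
    ("10.0.0.5", 6_000_000), ("10.0.0.7", 900_000),
]
print(top_talkers(flows))  # → [('10.0.0.5', 10000000), ('10.0.0.7', 900000)]
```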