Enterprise networks are evolving. They have to deal with more data, more devices and more locations, all while providing a fast, reliable service that does not cost the business too much money. As a result, networks are far more complex than they were just a few years ago.
The network has traditionally acted like the veins of an enterprise – delivering data and services across the organisation. However, technological advances such as mobile devices, cloud computing and more recently the internet of things (IoT) have changed the nature of enterprise networks. They now push beyond the traditional four walls of a business, into the cloud and out to mobile devices.
The demands placed on modern enterprise networks by the needs of the business mean that speed, reliability and uptime are more important than ever. The expansion of networks to mobile devices and applications running in the cloud, however, means that delivering them is also more difficult than ever; monitoring the network infrastructure for potential issues is harder to do in a third-party cloud environment.
“We’re seeing a lot of activity around extending network capabilities into the public cloud,” says Jim Duffy, senior analyst for the Networking Channel at 451 Research. “As enterprises use public cloud services they lose visibility into that infrastructure because it’s not theirs. But as they move workloads into the public cloud they’re demanding visibility into how those workloads are performing and behaving.”
It’s perfectly possible, for example, for employees to do a day’s work without touching their organisation’s network. Workers can use their own devices to access business applications that are hosted in the cloud. While that is fantastic from a productivity point of view, enabling employees to work wherever and whenever they want, it can present IT with some issues if something goes wrong.
When workers experience a problem, they turn to IT to fix it. Again, traditionally, that would not have caused much of a problem as everything used by the business was under its control. These days, that’s not always the case.
Michael Allen, vice-president of Emea at Dynatrace, says that moving applications to the cloud is changing the path between users and the applications they connect to. “That means a large chunk of application traffic is not hitting the corporate infrastructure environment. If something goes wrong, workers don’t contact the SaaS [software as a service] providers directly; they hold internal IT responsible, yet IT’s tools may not have any visibility into that environment.”
Machine learning and AI deliver network insight
Greater visibility across the network infrastructure has never been more vital. Suppliers are increasingly looking to new ways of gathering more information, and one method currently gaining a lot of traction in the industry is machine learning (ML) and artificial intelligence (AI).
Duffy explains that vendors are looking to add ML capabilities to take advantage of the increased amount of data being created by devices and the network infrastructure itself.
“The driver for this is more intelligence within the infrastructure itself and adding that intelligence and automation to tools,” he says. “Such proactive activity in detecting anomalies on the network happens faster and more intelligently, with minimal manual intervention. ML is a way of capturing the most useful data as quickly as possible, so anomalies can be detected and rectified in the shortest time possible.”
While this element of network monitoring and management is still fairly nascent, many suppliers are beginning to look at adding ML and AI capabilities to their products. At the moment the focus seems to be on automating the low-hanging fruit, such as the ability to automatically add more bandwidth if the situation requires it, says Paul Griffiths, technical director for the Advanced Technology Group at Riverbed.
“ML is part of the reactive piece, taking a more hands-off approach. If something goes wrong in the infrastructure and it’s an application that needs more bandwidth, then the network infrastructure should be able to supply that, either by bursting the bandwidth requirements or shutting down non-essential services.”
Leon Adato, “head geek” at SolarWinds, says the interest in machine learning is being driven by more advanced tools that can gather greater insight from the available data. “There’s a lot of data, but not a lot of information and even less insight,” he says. “The tools have got better over the last five to seven years at turning that raw data into information. Human intervention will always be needed, but some aspects can and should be automated. Why involve a human if a tool can fix the issue?”
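The kind of automated anomaly detection the analysts describe can be as simple as flagging metric values that deviate sharply from recent history. The following is a minimal sketch of that idea, not any supplier's actual product; the window size, threshold and sample data are illustrative assumptions.

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Flag indices whose value deviates more than `threshold`
    standard deviations from the preceding window's mean."""
    anomalies = []
    for i in range(window, len(samples)):
        recent = samples[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Steady bandwidth readings (Mbps, hypothetical) with one sudden spike
traffic = [100, 102, 99, 101, 100, 98, 103, 100, 101, 99, 100, 500, 101]
print(detect_anomalies(traffic))  # → [11]
```

A real ML-driven tool would learn seasonal baselines rather than use a fixed window, but the principle is the same: turn raw readings into an alert before a human has to sift through them.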
The DDoS problem
Security is another area of the network monitoring and management industry that is undergoing a big shift. DDoS attacks, which are getting larger and more frequent, are posing a greater threat to enterprise networks. Many of these attacks are being enabled by dumb IoT devices.
The DDoS attack aimed at DNS provider Dyn in October 2016 took down many of the world’s biggest websites, including Amazon Web Services, Box, GitHub, Reddit and Twitter. The attack was carried out using a botnet army made up of unsecured IoT devices, such as webcams.
Rupert Collier, Paessler’s senior channel manager for the UK and Ireland, says attacks like this will “make a few people wake up and smell the coffee regarding IoT and the security risks associated with it.”
In terms of what this means for network monitoring and management, while most tools are not strictly security devices, they can prove useful as an alert system. “These tools can have a security benefit in that they can alert IT to unusual traffic. That may not mean something is wrong or down, but it can just say to IT that something is out of sync. If your IPTV camera is sending out traffic at 1am to a server in Eastern Europe, that may be something you want to know about.”
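The 1am-camera scenario above amounts to a simple rule: flag flows that leave the internal network outside business hours. A hedged sketch, assuming hypothetical flow records and internal address prefixes:

```python
from datetime import datetime

# Hypothetical internal address prefixes for this organisation
ALLOWED_SUBNETS = ("10.", "192.168.")

def suspicious(flow):
    """Flag flows that leave the internal network outside business hours."""
    ts = datetime.fromisoformat(flow["time"])
    external = not flow["dst"].startswith(ALLOWED_SUBNETS)
    off_hours = ts.hour < 7 or ts.hour >= 19
    return external and off_hours

flows = [
    {"src": "10.0.0.5", "dst": "192.168.1.20", "time": "2017-03-01T01:12:00"},
    {"src": "10.0.0.9", "dst": "85.12.33.7",   "time": "2017-03-01T01:30:00"},
    {"src": "10.0.0.9", "dst": "85.12.33.7",   "time": "2017-03-01T14:00:00"},
]
alerts = [f for f in flows if suspicious(f)]
print([f["dst"] for f in alerts])  # → ['85.12.33.7']
```

Only the off-hours external flow is flagged; the same destination during the working day passes, which is exactly the “out of sync, not necessarily down” distinction described above.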
SolarWinds’ Adato adds that DDoS attacks using dumb IoT devices present network managers with two different issues. “With Dyn, customers couldn’t protect Dyn or defend against that DDoS, but they sure suffered the results. With automated attacks, businesses should be concerned about them from two directions: they don’t want to be attacked, and nor do they want to be the attacker by being part of a botnet army.”
Read more about network monitoring
- Most UK businesses have little visibility or control over their DNS servers and services even though they are a key component of businesses’ infrastructure and security profile.
- Container shipping company Maersk Line signs a five-year deal with Riverbed to monitor business-critical apps and services and troubleshoot network performance bottlenecks.
- The 2017 Computer Weekly/TechTarget IT Priorities survey shows growing interest in network privacy, security and management, but SDN and NFV are still lagging.
One potential fix for network managers is segmentation, says Adato. He says that separating the IoT network from the rest can protect the wider network infrastructure, as well as provide greater visibility into what’s going on.
“If you segment a network so that BYOD [bring your own device] and IoT are totally segregated you can monitor the touch points, the entry and exit points and look for normal and abnormal types of traffic. You can look for certain patterns; there’s no reason why your webcam system should ever have a single packet going off to – for example – China if you don’t do any business there,” he says.
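Once IoT and BYOD traffic is segregated, monitoring the touch points can reduce to checking each segment's flows against an expected-destination policy. A minimal sketch of that check, with entirely hypothetical segment names and allowlists:

```python
# Hypothetical per-segment policy: devices in each segment may only
# talk to these destinations
SEGMENT_POLICY = {
    "iot":  {"ntp.internal", "camera-cloud.example.com"},
    "byod": {"proxy.internal"},
}

def policy_violations(flows):
    """Return flows whose destination is not on the segment's allowlist."""
    return [f for f in flows
            if f["dst"] not in SEGMENT_POLICY.get(f["segment"], set())]

flows = [
    {"segment": "iot", "src": "webcam-03", "dst": "ntp.internal"},
    {"segment": "iot", "src": "webcam-03", "dst": "198.51.100.9"},  # unexpected
]
print([f["dst"] for f in policy_violations(flows)])  # → ['198.51.100.9']
```

Because the segment carries only one class of device, the allowlist can be short and strict, which is what makes the “not a single packet should go to China” rule enforceable in practice.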
As businesses continue to evolve and add more mobile devices and cloud-based services to their arsenal, network monitoring becomes even more crucial to ensuring uptime and reliability. Modern enterprises cannot afford network downtime, and detecting anomalies before they cause any problems is vital.
Jim Duffy of 451 Research concludes: “CIOs and network managers should make sure they have adequate visibility into their network, whether it’s on-premise or off, whether it’s private or public cloud or a combination of both. They should instrument their network at critical points where all of this data is concentrated, and strive to gain as much visibility at these points as they can for uptime, network and application performance, as well as security.”