Mining and construction company Thiess found bandwidth to be a bigger challenge than security when it rationalised its server systems. How did it go about creating workable data access policies?
Thiess' operations often involve large groups of sites working in remote locations. Before it began a data centre rationalisation project, it typically rolled out individual servers at each major site. That posed a major management challenge for tracking, updating and securing data.
"We've got sites all around the country that are varied in size and nature and duration," Moran explained at a recent briefing hosted by WAN optimisation vendor Blue Coat. "One of our constraints is that bandwidth is precious, so we've been using packet shapers for six or seven years to manage that bandwidth and have that visibility into what's going on in the WAN. It's been a maturity journey where we've worked with the business to craft the policies that we've applied." In some cases, 50 or more staff might need to work on a single 512K satellite link.
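To put that constraint in perspective, a quick back-of-the-envelope calculation shows what 50 staff sharing a 512K link actually means. The file size and the 1 Kbps = 1,000 bit/s convention below are illustrative assumptions, not figures from Thiess:

```python
# Rough per-user bandwidth arithmetic for a shared satellite link.
LINK_KBPS = 512  # total link capacity, from the article (1 Kbps assumed = 1,000 bit/s)
USERS = 50       # staff sharing the link, from the article

per_user_kbps = LINK_KBPS / USERS
print(f"Fair share per user: {per_user_kbps:.2f} Kbps")  # ~10.24 Kbps

# Time to move a 1 MB file at that fair share (assumed file size,
# ignoring protocol overhead and contention):
file_bits = 1 * 1024 * 1024 * 8
seconds = file_bits / (per_user_kbps * 1000)
print(f"1 MB transfer at fair share: ~{seconds / 60:.0f} minutes")
```

At roughly 10 Kbps each, even a modest file transfer ties up a user's share for the better part of a quarter hour, which is why shaping policy rather than raw capacity carries the load.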
Solving that problem required a two-phase approach: managing the bandwidth on each site, and centralising all data into the data centre and a central group of key applications, rather than relying on individual site servers.
"The objective is around delivering the service, not just connectivity. When we provision, we request a certain size from carriers, but often they get back to us with 'you can have this', which we just have to make do with. We use the shapers to manage that precious bandwidth.
"We identify critical apps and they're prioritised and things that are not critical get a best effort type of policy. They may or may not get any bandwidth at all. There's applications we deliver out of the data centre which were never meant to be delivered across the WAN. The key thing we're trying to provide is consistency of service irrespective of what's going on across the network."
"There are two broad groups of applications: those that are allowed and those that are not. In the allowed pile, there's probably a dozen Citrix apps, maybe two dozen Web sites and web services which are allowed, there's voice obviously, Exchange, and that's about it."
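The two-tier policy Moran describes can be sketched as a simple default-deny lookup: a small allow-list mapped to shaping classes, with everything else dropped. The application names and class labels below are illustrative assumptions, not Thiess's actual packet-shaper configuration:

```python
# Sketch of a default-deny traffic policy: allowed apps get a shaping
# class, everything else is denied. Names are assumptions for illustration.
POLICY = {
    "citrix":   "priority",     # critical published apps
    "voice":    "priority",     # latency-sensitive VoIP
    "exchange": "priority",     # email
    "intranet": "best-effort",  # allowed, but no guaranteed bandwidth
}

def classify(app: str) -> str:
    """Return the shaping class for an application, denying by default."""
    return POLICY.get(app, "deny")

print(classify("voice"))       # priority
print(classify("bittorrent"))  # deny
```

The design point is that the default is "deny", not "best effort": an application never seen before gets no bandwidth at all until the business explicitly adds it to the allowed pile.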
That centralisation has also made it easier to develop security policies and maintain data consistency. In terms of overall planning, security ranks third after bandwidth and network management, Moran said.
"One of the deliverables when we rolled out our ERP system [JD Edwards] was to try and get one source of the truth and to bring all these disparate sources of data into one place for a number of reasons, not the least of which is we can make sure the sites are being backed up and that we've got the data secured appropriately."
Prior to the centralisation, data was typically stored in different systems according to disciplines, with no easy way to collaborate. While the new centralised system makes collaboration easier, access decisions aren't solely the domain of IT.
"We had versioning issues, we had support issues with replication jobs," Moran said. "Now it's all available out of the core, the support and management of that is simplified. It's organised in a way that's parallel to their business rather than based on some arbitrary geographic boundary.
"We copied all the old data and made it a directory with their stuff [for each division]. We're not best placed to work out how their data should be laid out, so we're going to give it to you and you can decide what to do with it. It doesn't make sense for us to arbitrarily bundle that version here or there. That's a business piece about determining who has access to what data, and it's more consistent because it's being done with one source, rather than being done on each site."
Another big motivation was cost savings and governance. "What we found is that we had some really broken business processes because people can't connect to the data remotely," Moran said. "You had to go offsite to look at the data to work out what was going on, and that can take two days to get there on certain sites. We've got a lot of fly-in, fly-out jobs where the workforce is shipped in for a week then shipped out.
"We used to ship administrative staff in and out of the job sites since that's where the data was. Since we've implemented the proxy solution, the data is in the core, so clerks work out of the Perth office regardless of which site they're working on. There's economies because we don't have to fly them to the site and give them camp accommodation and site allowances, and they're looking at the data in real time, not data that's a day old or more. They're having a conversation about real-time data. That's allowed them to change the way they do their business.
"We can get economies because we don't need to put data servers on site for two or three people," Moran said. "Some of our sites might only be around for two or three weeks. If we can centralise the data and make it so people can get to it from anywhere, then we've covered off our governance drivers."
Moran estimates that about 70% of available data has been centralised since the project began around three years ago, though some remote file servers remain. "What we're doing is going through a process of saying if this project is going to finish in the next year, maybe we won't touch that."
The constant presence of portable PCs (Thiess has more than 3,500 laptops in use) does introduce another security challenge. "The largest share of malware infections are walk-ups," Moran said.
There's one other issue which Thiess is unlikely to solve in the short term. Being responsible for IT that stops remote workers accessing, as Moran delicately puts it, "the full diversity of the Internet" whilst at an isolated mining site, doesn't always make you popular.