Data is the lifeblood of most modern enterprises, and commodity trading firm Gleadell Agriculture is no exception.
The company, established in 1880, is an independent trader of grains, oilseeds and pulses in the UK and to overseas markets, and is reportedly responsible for purchasing 2.5 million tonnes of produce from UK farmers each year.
The trading floor is, therefore, where the bulk of the company’s business gets done, aided and abetted by huge quantities of transactional data that its 140-strong team need to analyse and act upon during the course of these trades on a minute-by-minute basis.
“Changes to data happen every minute, making it one of the business’s most valuable resources,” Tristan Hargreaves, IT infrastructure and support manager at Gleadell Agriculture, tells Computer Weekly.
Ensuring there are no interruptions to the flow of this data over the course of a working day is essential for keeping Gleadell’s day-to-day operations ticking over.
“Everything else – be it buildings or even people – can be replaced, if necessary, but we simply cannot afford to lose key business information from a business continuity perspective,” says Hargreaves.
Interruption and disruption
With 140 employees working across five sites, internet connectivity has been a source of service interruption for the company in the past, prompting the firm to re-evaluate its business continuity and disaster recovery procedures.
This, in turn, led to the startling discovery that its existing data backup system and processes were putting the firm at heightened risk of suffering a business-crippling data loss incident that it would struggle to bounce back from.
“Our systems [at the time] were run on a traditional 24-hour backup cycle, meaning backups were only carried out once a day, potentially exposing us to up to 24 hours of data loss,” says Hargreaves.
“After discussing this with the business, we realised it would be unrecoverable for us to lose a full day’s worth of trading data, both from an operational and customer perspective. It was not a viable position to be in going forward.”
As to how the firm found itself in this situation in the first place, Hargreaves – who has been with the firm around a year and a half – says it appears to be a classic case of IT struggling to keep pace with the growth of the business.
“I don’t know too much about the history [of the IT estate] before I joined, but I think it is a traditional tale of the IT systems growing rapidly over a short period of time, along with the complexity of the business, and then not looking at how things could be done better,” he says.
To help it close the holes in its disaster recovery strategy, Gleadell enlisted the help of its IT partner Think S3, which decided that the Zerto Virtual Replication (ZVR) software would be the best fit for its business requirements.
The company’s outgoing, snapshot-based replication system achieved a recovery point objective (RPO) of 14 minutes. The Zerto system put forward by Think S3, however, cut that figure to 14 seconds through its use of block-level replication.
“Over the years, I’ve seen so many problems caused by snapshots. Having a solution that doesn’t use them means I can go to bed without worrying about whether things will still be there in the morning,” he says.
“Also, with our previous system, if there was a failure, restoring from backup was a long, drawn-out process, and our staff would be sat there twiddling their thumbs while that happened.”
From on-premise to cloud
ZVR was initially deployed at Gleadell to replicate data stored within its on-premise environments at the start of 2017.
However, it was the knowledge that Zerto would soon be introducing functionality allowing users to replicate Azure workloads back to their on-premise environments for backup purposes that convinced Gleadell to adopt it.
At the time, the firm was eyeing up a move to the public cloud, which was set to start with the replacement of its on-premise CRM and ERP systems with a pair of bespoke applications, running in the Microsoft Azure public cloud.
The apps are set to go live in Azure sometime in February 2018, which means a consistent and reliable internet connection will be even more important for the firm.
“The internet connectivity issue is not so much a problem today for trading because our servers are on-premise, but – in the long-term – as we move to cloud, always-on connectivity will become critical,” says Hargreaves.
“Because we can’t always guarantee we’re going to have that internet connectivity, we now know we can use Zerto to continually replicate our production infrastructure back to the on-premise environment for failover purposes.”
And with plans to move even more of its application estate off-premise, now that the firm has adopted a cloud-first approach to sourcing IT resources, having this type of functionality at its disposal will only become more important to Gleadell as time goes on.
“Not everything is suitable to move to the cloud,” he says, “but we are looking to move as much of our production environment into Azure as possible.”
Avoiding cloud lock-in
And, should the firm ever decide to widen the pool of cloud providers it sources services from or decide to migrate out of Azure completely, Hargreaves says it will use Zerto to help shift its applications and workloads around.
“The main reason we chose their technology was for disaster recovery and business continuity purposes, but it does have an added benefit of helping us to avoid vendor lock-in,” he says.
“For the foreseeable future, we’re planning to run our production environment out of Azure, but if Microsoft changed their pricing structure, we know we can use Zerto to move our data out to another provider, back on-premise or somewhere in between.”
For the moment, though, Hargreaves says one of the biggest benefits of all this work is that it has contributed to creating a much less stressful working environment for all concerned.
“I’m a much more relaxed IT manager now, because – if disaster does strike – I know we’re not going to incur a big data loss,” he says.
“We do failover testing every day, and that gives you a massive confidence boost in your disaster recovery plan, because you don’t want to find out something hasn’t worked when you’re in the middle of a disaster.”