Open Seas: Common file synchronisation issues in today’s hybrid world

This is a guest post for the Computer Weekly Developer Network written by Jason Kent in his capacity as director at Open Seas, a UK-based enterprise IT solutions company specialising in data protection and file synchronisation.

Kent writes in full as follows…

Are your users reporting that their file edits are lost after another user elsewhere makes changes to their copy of the same file? Is the number of users and the volume of data being processed on your network causing bandwidth issues? Are you left unalerted when things go wrong, leaving your file infrastructure in a mess? You are not on your own. These are some of the most common issues enterprise IT staff face with their company’s file synchronisation and replication systems.

As you may know, replication and synchronisation technologies are key components of network infrastructures, allowing IT staff to effectively protect, distribute and share information between remote offices and disaster recovery data centres.

For basic needs, Microsoft’s Distributed File System Replication (DFS-R) is usually appropriate, but a high-volume environment can lead to problems.

For those who are not aware, DFS-R is the file replication engine within Windows Server… and it is a free utility included in the standard Windows Server operating system.

It is designed to replicate data between DFS Namespaces (another utility provided by Microsoft that creates a virtual file system of folder shares). The DFS-R service provides basic replication functionality on your network. However, DFS-R can prove costly in setup time and management time, and its reliability has historically been questionable. Let’s unpack two of the main teething issues below.

Algorithm vs file locking

In the algorithm vs file-locking scenario, DFS-R’s ‘last writer wins’ approach is the only option on offer.

DFS-R provides only one way of dealing with multiple updates. A conflict occurs when two users update the same file on different physical servers before either copy can be synchronised, leaving two different versions of the same file on two different machines. With DFS-R, the newer file is the one that gets replicated.


This results in the changes made by the other user being lost. DFS-R does save a copy of the file locally on the machine where the conflict occurs and writes an event log error message, but an administrator must manually retrieve these files. Microsoft actually recommends not using DFS-R in an environment where multiple users could update or modify the same files simultaneously on different servers.
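To make the behaviour concrete, here is a minimal Python sketch of ‘last writer wins’ resolution. The server names, timestamps and the `resolve_last_writer_wins` function are purely illustrative assumptions, not how DFS-R is actually implemented:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FileVersion:
    server: str            # which file server holds this copy (hypothetical names)
    content: bytes
    modified: datetime     # last-write timestamp reported by that server

def resolve_last_writer_wins(a: FileVersion, b: FileVersion):
    """Return (winner, conflict_copy): the newer write replicates everywhere,
    the older write survives only as a local conflict copy."""
    winner, loser = (a, b) if a.modified >= b.modified else (b, a)
    return winner, loser

# Two users edit the same file on different servers before replication runs.
v1 = FileVersion("LONDON-FS01", b"quarterly figures, Alice's edits",
                 datetime(2024, 5, 1, 9, 0, tzinfo=timezone.utc))
v2 = FileVersion("LEEDS-FS02", b"quarterly figures, Bob's edits",
                 datetime(2024, 5, 1, 9, 5, tzinfo=timezone.utc))

winner, conflict = resolve_last_writer_wins(v1, v2)
print(f"Replicated everywhere: the copy from {winner.server}")
print(f"Lost unless an admin retrieves it: the copy from {conflict.server}")
```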

For environments with multiple users scattered across different locations and servers, engineers need a solution – such as Software Pursuits’ SureSync – that minimises the ‘multiple updates’ issue. Usually, the best approach is to enable file locking, i.e. when a user opens a file, all other copies are locked on the other devices that are part of the synchronisation network. This way, when a different user tries to open the file, they are given read-only access. Once the original user has finished with the file, saved it and closed it, it is synchronised to the other machines and then the lock is released, allowing other users to obtain write access.
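A rough sketch of that open-lock-sync-release cycle is below, using an in-process Python lock purely as an illustration; the class, site names and file name are made up and this is not SureSync’s actual API:

```python
import threading

class SyncedFile:
    """Hypothetical illustration of file locking across a sync network:
    the first site to open a file for writing holds the lock; everyone else
    gets read-only access until the file is saved, synchronised and closed."""

    def __init__(self, name: str):
        self.name = name
        self._lock = threading.Lock()
        self._holder = None

    def open(self, site: str) -> str:
        # First writer takes the lock; later openers fall back to read-only.
        if self._lock.acquire(blocking=False):
            self._holder = site
            return "read-write"
        return "read-only"

    def close(self, site: str) -> None:
        # Saving/closing triggers synchronisation, then the lock is released.
        if self._holder == site:
            print(f"Synchronising {self.name} from {site} to all other sites...")
            self._holder = None
            self._lock.release()

f = SyncedFile("budget.xlsx")
print(f.open("London"))   # read-write: London holds the lock
print(f.open("Leeds"))    # read-only: lock already held
f.close("London")         # sync, then release
print(f.open("Leeds"))    # read-write: Leeds can now edit
```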

One method may not suit all of a large enterprise’s needs.

That’s why it’s best to look for solutions that offer collaborative file sharing between offices with a variety of one-way and multi-way rule methods. If you are looking to distribute files from a central source, you do not want to use a multi-directional rule. A user could modify a file on one of the destinations, which a multi-directional synchronisation would then replicate back to the source, changing the master files and potentially damaging that master set of data, as the sketch below illustrates.
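This toy Python sketch, with made-up file and site names, shows why the direction of the rule matters when protecting a master data set (it models the behaviour described above, not any specific product):

```python
def run_sync(rule: str) -> str:
    """Return the master copy of price_list.csv after one sync pass under the given rule."""
    master = {"price_list.csv": "v1 (master)"}
    baseline = dict(master)                       # what each site held before any edits
    destinations = {"LEEDS": dict(master), "GLASGOW": dict(master)}

    # A user accidentally edits the file at a destination before the sync runs.
    destinations["LEEDS"]["price_list.csv"] = "edited locally by mistake"

    if rule == "multi-directional":
        # Changed files at any destination flow back and overwrite the master set...
        for copy in destinations.values():
            for name, data in copy.items():
                if data != baseline[name]:
                    master[name] = data
    # ...then the (possibly overwritten) master is pushed out to every destination.
    for copy in destinations.values():
        copy.update(master)
    return master["price_list.csv"]

print(run_sync("one-way"))            # 'v1 (master)' - master data protected
print(run_sync("multi-directional"))  # 'edited locally by mistake' - master overwritten
```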

Too many users?

Bandwidth throttling issues are real, so how do we overcome them?

Another common pain point is throttling. DFS-R can throttle bandwidth usage on a per-connection basis, but only with a fixed throttle. DFS-R does not perform “bandwidth sensing”, so if usage on the connection changes, the throttle does not adapt to the new network conditions.

A Quality of Service (QoS) style throttle helps to avoid slowing your systems down for your users. Better still, a system with advanced, dynamic throttling suits enterprise-sized systems: bandwidth usage is based on a percentage of the bandwidth available. For example, you could allow 50% of the connection – if the connection is 10Mbps and idle, approximately 5Mbps would be used. If another process consumed 5Mbps of that connection, the throttle would reduce to approximately 2.5Mbps (50% of the free 5Mbps). This allows your file synchronisation system to use more bandwidth when it is available and less when other processes need it.
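As a back-of-the-envelope sketch, the Python below reproduces that arithmetic; the 50% share and the `dynamic_throttle_mbps` function are illustrative assumptions rather than any product’s actual algorithm:

```python
def dynamic_throttle_mbps(link_capacity_mbps: float,
                          other_traffic_mbps: float,
                          share: float = 0.5) -> float:
    """Allow the sync job a fixed share of whatever bandwidth is currently free,
    rather than a fixed absolute cap."""
    free = max(link_capacity_mbps - other_traffic_mbps, 0.0)
    return share * free

# 10 Mbps link, nothing else running: the sync job may use ~5 Mbps.
print(dynamic_throttle_mbps(10, 0))   # 5.0
# Another process now consumes 5 Mbps: the throttle drops to ~2.5 Mbps.
print(dynamic_throttle_mbps(10, 5))   # 2.5
```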

These are only two of the issues organisations experience with DFS-R. It also provides limited reporting options, limited ability to synchronise encrypted files and no ability to synchronise files stored on FAT or ReFS volumes.

All of these – and especially the two pain points detailed above – make it hard for systems to operate efficiently in today’s hybrid way of working, as engineers need to adapt systems for users working from different locations while also managing bandwidth speeds that vary at different times of day.

 
