PeerGFS to add AI/ML anomaly detection to distributed file system product

Peer will add AI/ML-based anomaly detection as it ramps up security protection in its PeerGFS distributed file management software, with Linux server support also to come in 2022

File management software maker Peer Software plans to launch Linux file server compatibility this summer, along with enhanced artificial intelligence/machine learning (AI/ML)-based file access anomaly detection and storage audits.

Peer’s PeerGFS file service allows multi-site file access and sharing with hub-spoke and peer-to-peer failover and replication between sites. The product started out nearly 30 years ago, specialising in extending Windows DFS (distributed file system) and DFS-R (DFS Replication) beyond Windows storage.

It provides a distributed file service for customers who want to use Windows file servers alongside third-party NAS storage. In doing so, it claims to differ from providers of similar file management software, such as Nasuni.

“Nasuni uses a proprietary namespace to overlay customer files,” said CEO Jimmy Tam. “We still base ourselves on Windows DFS, so there’s nothing in the data path to obfuscate things.”

Peer also plays in a similar space to the likes of Ctera and Panzura.

PeerGFS enables a distributed file system to be created across mixed storage systems that include Windows, NetApp Data ONTAP, Dell EMC Isilon/VNX/Unity, Nutanix Files, S3, and Azure Blob, with asynchronous near real-time replication across the systems. Linux server support is set to be added this summer.

PeerGFS requires deployment of its Peer management centre, which is the brain that organises input/output (I/O) traffic flows and can be run on a physical or virtual server.

Below that, there are Peer agents which are tailored to storage platform application programming interfaces (APIs) and which log file events such as access, changes, saves, and so on. The agents forward messaging – via the management centre – to other PeerGFS instances.
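As a rough illustration of that agent-to-agent flow – a hypothetical sketch, not Peer’s actual API, with all class and method names invented for the purpose – an agent logs a file event and the management centre relays it to every other registered agent:

```python
from dataclasses import dataclass

@dataclass
class FileEvent:
    path: str
    action: str          # e.g. "access", "change", "save"
    source_agent: str

class ManagementCentre:
    """Relays events from one agent to all other registered agents."""
    def __init__(self):
        self.agents = {}

    def register(self, agent):
        self.agents[agent.name] = agent

    def forward(self, event):
        # Fan the event out to every agent except the one that logged it
        for name, agent in self.agents.items():
            if name != event.source_agent:
                agent.receive(event)

class Agent:
    """Stands in for a storage-platform-specific agent that logs file events."""
    def __init__(self, name, centre):
        self.name = name
        self.centre = centre
        self.inbox = []
        centre.register(self)

    def log_event(self, path, action):
        self.centre.forward(FileEvent(path, action, self.name))

    def receive(self, event):
        self.inbox.append(event)
```

In the real product the agents sit against each storage platform’s APIs and the relay crosses sites; the sketch only shows the hub-and-relay shape of the design.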

That event flow provides the core of the functionality: customers can set file sharing and replication between specified folders, with additional features such as folder size limits, file pinning, and so on.

A so-called “network of brokers” sets policies on file event flows so that, for example, data is replicated between specified folders or kept within set geographies.
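A minimal sketch of what such a policy check might look like – the folder paths, site names and function are hypothetical, invented to illustrate the geography-restriction example above:

```python
# Replication rules keyed by source folder, restricting which target
# sites may receive data (e.g. to keep it within a set geography).
POLICIES = {
    "/projects/eu": {"frankfurt", "paris"},           # keep within the EU
    "/projects/global": {"frankfurt", "paris", "nyc"},
}

def may_replicate(path, target_site, policies):
    """Return True if an event under `path` may be sent to `target_site`."""
    for folder, sites in policies.items():
        if path.startswith(folder):
            return target_site in sites
    return False  # no matching policy: do not replicate
```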

Key use cases are global file-sharing and collaboration, with local caching possible. Deltas are picked up from file event streams and changes replicated between locations. That provides for rapid failover between sites should a server or storage system go down, with recovery point objectives (RPOs) claimed to be cut to “near zero”, according to Tam.

There is also automated failback as Peer’s software brings the primary system back up when it is ready, with Tam adding that failover functionality had proved very popular in virtual desktop infrastructure (VDI) deployments.

PeerGFS will also handle file sharing wherever an instance of the Windows file system is running, including in the AWS, Azure and Google clouds, said Tam, although he was keen to point out that many customers don’t want to use the cloud for any or all of their data.

“Lots of customers, such as those in the military or finance, don’t want to use the cloud for reasons of security,” said Tam. “We say that you don’t have to be cloud-first – you can be cloud-friendly or cloud-optional.”

Beyond file management services, Peer is trying to add value with auditing and behaviour monitoring, including via AI/ML. It already has malicious event detection (MED), which includes the use of “bait files” that generate no file events in normal operation – if events do occur, such as when ransomware crawls the system, alerts are raised.
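The bait-file idea reduces to a simple check against the event stream. A hypothetical sketch – the paths and function name are illustrative, not Peer’s implementation:

```python
# Files planted where no legitimate user or application should touch them;
# any file event on one is a strong signal of automated crawling.
BAIT_FILES = {"/share/.sys_cache/readme.txt"}  # hypothetical bait path

def is_bait_hit(path, bait_files=BAIT_FILES):
    """Return True (raise an alert) if a file event touched a bait file."""
    return path in bait_files
```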

Similarly, “trap folders” slow down ransomware crawls by snaring them in recursive directory loops. Meanwhile, patterns from known ransomware variants can be matched against event streams in file systems.
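The trap-folder effect can be modelled in miniature: introduce a cycle in the directory graph (in practice via links or junctions) and a naive recursive crawler revisits the same folders indefinitely. The in-memory tree below is purely illustrative:

```python
# A directory graph with a deliberate cycle: /share/trap links back to /share.
TREE = {
    "/share": ["/share/docs", "/share/trap"],
    "/share/docs": [],
    "/share/trap": ["/share"],  # the loop that snares a naive crawler
}

def crawl(root, tree, max_visits=10):
    """Naive depth-first crawl with no cycle detection; the visit cap
    stands in for the time a crawler wastes inside the trap."""
    visits = 0
    stack = [root]
    while stack and visits < max_visits:
        node = stack.pop()
        visits += 1
        stack.extend(tree.get(node, []))
    return visits
```

Against the looped tree the crawler exhausts its visit budget; against a loop-free tree it finishes in as many visits as there are folders.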

Planned additions to MED include collecting event stream data per user to develop a “whitelist” of expected behaviour, against which anomalous activity – whether from bad human actors or machine-generated actions – can be identified.

“We’d generate a pattern of what ‘normal’ looks like and monitor for deviations,” added Tam.
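A hypothetical sketch of that baseline-and-deviation approach – the data structures and names are invented for illustration, since the feature is still planned:

```python
def build_baseline(events):
    """Build a per-user 'whitelist' of observed actions.
    events: iterable of (user, action) pairs from the audit stream."""
    baseline = {}
    for user, action in events:
        baseline.setdefault(user, set()).add(action)
    return baseline

def is_anomalous(user, action, baseline):
    """Unknown users and previously unseen actions both count as deviations."""
    return action not in baseline.get(user, set())
```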

Also coming in the summer will be file access audits that report in detail on user interaction with files, as well as trend alerting and the addition of network file system (NFS) support to bring Linux compatibility.
