Scality has launched version 2.0 of its Artesca platform, with a big emphasis on the ransomware protection inherent to object storage and its availability as a (virtual) appliance.
Artesca is Scality’s object storage product aimed at single application use cases and is heavily targeted at data protection. Scality wants it to be seen as “lightweight” in comparison to the company’s Ring object storage, which is targeted at multiple application support and petabyte scale, although, according to CMO Paul Speciale, around 30% of customer deployments are as backup storage.
Artesca started life as a container-deployed product aimed at DevOps workflows, but that’s a thing of the past in practical terms. Now, the product’s Linux core is stripped to its secure minimum as part of its re-working for the age of ransomware.
The company has developed what Speciale called “unbreakable, immutable object storage for backup”.
Speciale positioned such anti-ransomware functionality as the inner layers of an onion, with human, network and application-level defences sitting above it.
“Storage is part of an organisation’s mission-critical assets,” said Speciale. “Immutable is a must-have, and we need to be strong on defence in case other layers in the stack are broken.”
At the OS level, Scality uses Linux, which Speciale describes as “hardened” by, for example, removing root access so that ransomware intruders cannot exploit it. The number of Linux packages that can be installed is also much reduced, to minimise patching and possible exploits.
Key aspects that are part of the mechanism of S3 and object storage leveraged by Scality in Artesca include:
- Object locking, with the ability for the customer to set locks for specific durations and to ensure data cannot be unlocked except via multi-factor authentication, for example.
- Closely allied to this is Scality’s Compliance Mode, which gives WORM-like functionality.
- Backups direct to object storage as the primary target, which is a feature of Scality’s partnership with Veeam and is part of the latter’s Data Platform v12. Previously, backups that used object storage as a target had to be staged through file-based protocols.
- Instant recovery, enabled by overcoming some barriers in restore performance, but also, according to Speciale, a “philosophical mindset” that had seen object storage as unsuited to such functionality.
- Resistance to exfiltration of data as a result of the sharding of data into thousands of locations which intruders would find “indecipherable”, said Speciale.
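Object locking of the kind described above is exposed through standard S3 API parameters rather than anything proprietary. As a hedged sketch (the bucket name, object key and retention period are illustrative, not Artesca specifics), the following builds the request parameters an S3 client such as boto3 would pass to `put_object` to store an object under a compliance-mode lock:

```python
from datetime import datetime, timedelta, timezone

def compliance_lock_params(bucket, key, body, retention_days):
    """Build S3 put_object parameters for a compliance-mode object lock.

    In COMPLIANCE mode, no user can overwrite or delete the object
    version until RetainUntilDate passes, giving WORM-like behaviour.
    """
    retain_until = datetime.now(timezone.utc) + timedelta(days=retention_days)
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        # GOVERNANCE mode, by contrast, can be bypassed by privileged users
        "ObjectLockMode": "COMPLIANCE",
        "ObjectLockRetainUntilDate": retain_until,
    }

# Against an S3-compatible endpoint (hypothetical URL), with a bucket
# created with object lock enabled, the call would look like:
#   import boto3
#   s3 = boto3.client("s3", endpoint_url="https://s3.example.internal")
#   s3.put_object(**compliance_lock_params("backups", "job-001.vbk", data, 30))
```

Compliance mode is what makes the lock resistant to intruders with stolen credentials: unlike governance mode, it cannot be lifted early even by administrators, which is why it maps to the WORM-like Compliance Mode behaviour backup targets favour.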
Artesca version 2.0 is available as a software appliance on approved hardware or as a VMware virtual machine.
Artesca initially launched as a Kubernetes-focussed storage product, running in Kubernetes and aimed at developers who wanted a cloud-native storage product. That approach has been superseded, said Speciale, adding: “The software is designed with Kubernetes machines running internally, but you get it in a Linux format, via a tarball that you can’t take apart and run as containers.”
Object storage is a minority interest, but it is steadily growing in popularity. Its I/O characteristics, cheap but slow, have tended to make it best suited to long-term retention, although high-speed flash is now being deployed beneath it. It is mostly used as storage for secondary data, which ranges from backup, through general unstructured data, to datastores for analytics workloads.
The recent TechTarget/Computer Weekly IT Priorities Survey for 2023 found that 5% of respondents planned to deploy object storage this year, compared to 16% for SAN, 11% for NAS and 10% for hyper-converged infrastructure.
Read more about object storage
- Unified file and object storage: Three suppliers and their approaches. We look at unified file and object storage products from NetApp, Pure Storage and Scality, the differences, the workloads aimed at, and how unified they actually are.
- Is object storage good for databases? We look at object storage vs block access SAN storage and ask if object storage can be used for database workloads, or is it just good for bulk storage of analytics datasets?