Computational Storage: Druva CTO - Models, tools & business rules

Following up on the Computer Weekly Developer Network (CWDN) series focusing on computational storage, CWDN continues the thread with some extended analysis in this space.

The commentary below features a Q&A with Stephen Manley in his role as CTO of Druva, a company known for its cloud-native data protection. The company offers a SaaS platform for data protection across datacentres, cloud applications and endpoints.

The central technology proposition and promise from Druva is the chance to securely back up and recover data with the scale and simplicity of the public cloud and, because we’re talking public cloud, to pay only for what you use.

Computational storage enables a software and hardware system to offload work and alleviate constraints on existing compute, memory and storage. Given that, how should software engineers identify, target and architect specific elements of the total processing workload to reside closer to computational storage functions?

CWDN: How do developers know how, when and where to make the computational storage cut?

Manley: To make the computational storage cut, choose something standardised, so that you are not running complex code in an environment that is difficult to manage and secure. Keep it simple.

If you keep to standardised functionality, the amount of refactoring could be limited. In some ways, the shift to serverless and containerised functionality also sets design patterns that you can leverage. 

If you are using a legacy application, you should think of computational storage as adding functionality. Otherwise, it will take a significant amount of effort to refactor the application.

CWDN: If we agree that computational storage reduces Input/Output (I/O) bottlenecks and opens the door to greater data processing speed for certain applications, is it only real-time apps and data services that really benefit, or are there other definite beneficiaries of this reduced latency?

Manley: The primary beneficiaries will be real-time and data-intensive applications. Otherwise, the value will not compensate for the additional application and management complexity. Of course, there is an extensive range of real-time and data-intensive applications and it is one of the fastest-growing areas in our industry.

While the edge certainly needs compute and I/O, the biggest edge challenge will likely be manageability at scale. Adding edge devices will be comparatively easy; managing data, software and security at scale will not be. Computational storage, however, will simplify the deployment of additional devices because there are fewer components and, by building in standard functions (e.g. compression), it will make it easier for developers to build efficient solutions at the edge.
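
As a purely illustrative sketch of those built-in standard functions, the Python below offloads compression to the drive when the device advertises it, and falls back to host-side zlib when it does not. The device object and its supports() and compress_and_write() calls are hypothetical stand-ins for a computational storage driver, not a real API; only the zlib fallback path uses a real library.

    import zlib

    def store_compressed(device, key: str, payload: bytes) -> None:
        # 'device' is a hypothetical handle to a computational storage
        # drive; supports() and compress_and_write() are invented names.
        if device.supports("compression"):
            # Fixed, standardised function built into the drive: the
            # payload is compressed next to the media, so the host CPU
            # and the I/O path never carry the compression work.
            device.compress_and_write(key, payload)
        else:
            # Host-side fallback: compress on the CPU with a real
            # library, then write the result as ordinary data.
            device.write(key, zlib.compress(payload, level=6))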

Two models make sense

There are two models of computational storage that make sense:

  1. Inside of servers/HCI (hyperconverged infrastructure) appliances. In this case, the management constraints don’t change compared to standard HCI management.
  2. Inside of a cloud – e.g. AWS S3 Object Lambda. In this case, the management should be similar to managing Lambda functions. 
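
To make the second model concrete, here is a minimal sketch of an S3 Object Lambda handler, which transforms an object as it is read and is arguably the cloud-side analogue of running a small function next to the storage. The event fields and the write_get_object_response call follow AWS’s documented S3 Object Lambda pattern; the uppercase transform is just a placeholder for a small, standardised function.

    import urllib.request
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        # S3 Object Lambda hands the function a presigned URL for the
        # original object, plus routing tokens for returning the result.
        ctx = event["getObjectContext"]
        with urllib.request.urlopen(ctx["inputS3Url"]) as resp:
            original = resp.read()

        # Placeholder transform: keep it small, directed and efficient.
        transformed = original.upper()

        # Stream the transformed bytes back to the caller's GET request.
        s3.write_get_object_response(
            Body=transformed,
            RequestRoute=ctx["outputRoute"],
            RequestToken=ctx["outputToken"],
        )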

Druva’s Manley on CSD: A sanguine & sensible approach.

The payoff with computational storage will vary depending on the infrastructure. For example, as GPUs resolve their I/O limitations, the bottleneck will primarily be in latency from storage to compute. The benefit of computational storage will diminish with slower storage or faster networks. 

Analytics of streaming data (video and audio) will continue to increase in importance – for retail (tracking customers), law enforcement (tracking people), manufacturing (tracking production lines) and health care (monitoring patients under remote care). 

CWDN: So how should organisations approaching their first use of computational storage know the difference between fixed and programmable computational storage services (FCSS and PCSS), and know how and when they should be looking at either option?

Manley: The rule of thumb we follow is:

  1. An organisation skilled in serverless and container programming should consider PCSS.
  2. An organisation that primarily uses traditional applications should start with FCSS and look at serverless and containers before considering PCSS.

Not to belabour the serverless analogy, but the only way computational storage works at scale is if:

  1. The function itself is small, directed and efficient
  2. The function is standardised

If your code is making a large number of ‘decisions’, it is unlikely to be successful, efficient or maintainable.
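
As a rough illustration of what small, directed and efficient means in practice, consider the shape of the function below: one pass over the data, no branching policy logic, and only a tiny result returned to the host. It is ordinary Python standing in for whatever runtime a programmable device would actually expose; the file name in the usage comment is invented.

    def count_matches(records, pattern: bytes) -> int:
        # One pass, one simple test per record, and a single integer
        # returned to the host instead of the full data set crossing
        # the I/O path: the shape of work worth pushing to storage.
        return sum(1 for record in records if pattern in record)

    # Example: count error lines without shipping the log to the host.
    # with open("app.log", "rb") as log:
    #     errors = count_matches(log, b"ERROR")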

Computational storage will not be a resource that advertises itself to applications. It will be deliberately built into an architecture (e.g. edge computing). Advertising APIs could be decades away.

The computational storage community should not ignore what is happening in the cloud. Serverless computing brought closer to object storage is a good model. Don’t start from first principles; build on well-understood and established patterns and adoption will accelerate.
