SaaS series - Cohesity: A tour de force of cloud connectivity communiqués

This is a guest post for the Computer Weekly Developer Network written by Jean-Baptiste Grandvallet, EMEA SE manager for strategic accounts at Cohesity.

Grandvallet writes in full as follows…

Connecting Software as a Service (SaaS) entities typically involves integrating different SaaS applications to enable data flow and communication between them.

First, you need to determine which applications need to be connected to exchange data or trigger actions.

Then, the integration requirements need to be evaluated.

To do this, define what the connection specifically has to achieve, such as data synchronisation, automated workflows, or real-time communication.

Then decide how you want to connect the SaaS entities. You can select an integration tool or platform that suits your needs.

Several options are available, including:

  • Integration Platforms as a Service (iPaaS): These platforms offer pre-built connectors and tools for creating integrations.
  • Custom Development: If your integration needs are complex, you may need to build custom integrations using APIs and code. A popular approach is a REST API, which conforms to the design principles of REST, or Representational State Transfer, an architectural style; a minimal sketch of such a call follows this list.
  • Third-Party Integrations: Some SaaS applications offer integrations through marketplaces from popular Cloud Service Providers like AWS, Google or Microsoft, or through third-party connectors.
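
As an illustration of the custom-development route, here is a minimal sketch of a REST API call using Python's requests library. The base URL, endpoint and token are hypothetical placeholders for whatever the target SaaS application actually exposes.

```python
import requests

BASE_URL = "https://api.example-saas.com/v1"   # hypothetical SaaS API endpoint
API_TOKEN = "replace-with-a-real-token"        # issued by the SaaS provider

def list_projects():
    """Fetch a collection resource from the REST API and return it as JSON."""
    response = requests.get(
        f"{BASE_URL}/projects",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()   # surface 4xx/5xx errors rather than failing silently
    return response.json()

if __name__ == "__main__":
    for project in list_projects():
        print(project["id"], project["name"])
```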

Also explore the API documentation in detail.

Many SaaS applications provide APIs that allow you to interact with their services programmatically. Among the most widely used are Microsoft's APIs for O365 and the APIs from AWS.

For these services, users need to set up a SaaS connector, which works as a kind of proxy. The documentation will explain in detail which information needs to be configured in the proxy to connect the SaaS application. In most cases, service accounts need to be created, credentials exchanged and network configurations defined.
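
To make the service-account step concrete, the sketch below shows the OAuth 2.0 client-credentials flow used to obtain a token for Microsoft Graph, the programmatic interface behind O365. The tenant ID, client ID and secret are placeholders a connector would hold in its configuration; this is an illustrative sketch rather than a complete connector.

```python
import requests

# Placeholders a SaaS connector would typically keep in its configuration.
TENANT_ID = "your-tenant-id"
CLIENT_ID = "your-app-registration-id"
CLIENT_SECRET = "your-client-secret"

def get_graph_token():
    """Exchange service account (app registration) credentials for an access token."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "https://graph.microsoft.com/.default",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_users(token):
    """Call Microsoft Graph with the token obtained above."""
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/users",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["value"]
```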

The proxy itself helps with balancing the workload across different SaaS regions, authenticating the user, managing failover and encrypting the data.
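
The failover part of that job can be as simple as trying one regional endpoint and falling back to the next. The sketch below assumes two hypothetical regional URLs and is only meant to illustrate the idea.

```python
import requests

# Hypothetical regional endpoints of the same SaaS service.
REGION_ENDPOINTS = [
    "https://eu-west.api.example-saas.com",
    "https://us-east.api.example-saas.com",
]

def get_with_failover(path, token):
    """Try each region in turn and return the first successful response."""
    last_error = None
    for base in REGION_ENDPOINTS:
        try:
            resp = requests.get(
                f"{base}{path}",
                headers={"Authorization": f"Bearer {token}"},
                timeout=5,
            )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            last_error = err   # note the failure and try the next region
    raise RuntimeError(f"All regions failed, last error: {last_error}")
```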

Abilities & responsibilities

So then, who is responsible for connecting SaaS services?

This is highly dependent on the size and structure of the organisation, as well as the complexity of the integration. It typically falls on a combination of IT professionals, developers, business owners, end users and sometimes third-party integration specialists, with the specific roles and responsibilities varying from project to project and from one organisation to the next.

It’s important for all parties involved to work together effectively to ensure the successful integration of SaaS services. Clear communication, well-defined roles and a thorough understanding of the organisation’s goals are crucial for a successful integration process.

We also need to ask, what elements of any given cloud stack need to be integrated externally, or indeed internally? An organisation will have to integrate various elements within a cloud stack to create a cohesive and functional cloud infrastructure. Again, the specific elements can vary depending on your organisation’s needs and the cloud services you use.

Certain internal services and tools should definitely be integrated, as they underpin service level agreements in core disciplines like security and compliance. Keeping these services internal gives organisations the tools to enforce the required level of security and controls across complex SaaS and multi-cloud environments.

The integration of IAM solutions is crucial to managing user access and permissions consistently across various cloud services and applications. Also, the monitoring and logging of different SaaS and multi-cloud environments should be done centrally within the organisation. This will enable customers to collect, analyse and visualise performance, costs and operational data from their different cloud services. The monitoring should also cover the implemented security controls and compliance. This will allow organisations to quickly detect cyber incidents and potential compliance issues across their cloud environment.
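
As a hedged illustration of central monitoring, the sketch below normalises events from two hypothetical collectors, one per cloud or SaaS service, into a single log that can be analysed in one place; the collectors and event fields are assumptions, not a real product API.

```python
import datetime
import json

# Hypothetical collectors: each would pull events from one cloud or SaaS service.
def collect_aws_events():
    return [{"source": "aws", "type": "login_failure", "severity": "high"}]

def collect_o365_events():
    return [{"source": "o365", "type": "mass_download", "severity": "medium"}]

def write_central_log(events, path="central-audit.log"):
    """Append normalised events to one central log so they can be analysed together."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(path, "a") as fh:
        for event in events:
            fh.write(json.dumps({"collected_at": now, **event}) + "\n")

if __name__ == "__main__":
    write_central_log(collect_aws_events() + collect_o365_events())
```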

Quite often, companies will have to integrate data that is stored internally.

In these cases they will have to move data between different types of data storage systems, such as databases, data warehouses and data lakes. When transitioning to the cloud, data migration is a critical integration task to move existing data from on-premises or legacy systems to the cloud. And organisations should make sure that they clean the data or reduce the volume with key technologies like compression or deduplication before they migrate the data to the cloud.
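
As a rough sketch of that clean-up step, the code below performs file-level deduplication by content hash and compresses each unique file before it is staged for migration. Real migration tooling typically deduplicates at block level, so treat this only as an illustration of the principle.

```python
import gzip
import hashlib
from pathlib import Path

def prepare_for_migration(source_dir, staging_dir):
    """Deduplicate files by content hash and compress the unique ones into a staging area."""
    staging = Path(staging_dir)
    staging.mkdir(parents=True, exist_ok=True)
    seen_hashes = set()
    for file in Path(source_dir).rglob("*"):
        if not file.is_file():
            continue
        data = file.read_bytes()
        digest = hashlib.sha256(data).hexdigest()
        if digest in seen_hashes:          # identical content already staged: skip it
            continue
        seen_hashes.add(digest)
        with gzip.open(staging / f"{digest}.gz", "wb") as out:
            out.write(data)                # compress the unique copy before upload
    return len(seen_hashes)                # number of unique objects to migrate
```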

When, not if

Another important question is when – in the cloud-native software application development lifecycle – should we connect SaaS services and the wider cloud estate?

The timing of this is crucial as it can impact your application’s functionality, performance, security and scalability. Independent of the different project stages it is important to involve all relevant stakeholders, including integration specialists. This will allow organisations to ensure that their SaaS service and cloud estate connections are well-implemented and meet the application’s and business requirements at the same time.

Early planning and testing are key to preventing integration-related issues in production and ensuring a successful cloud-native application deployment.

Why then does connecting the cloud give us a technology service that represents more than the sum of the parts within? When you do it right, connecting cloud services will give you more flexibility, scale, security and global availability for a lower price.

Cloud services are by definition designed to work seamlessly with each other, allowing you to combine various functionalities and data sources to create a more comprehensive solution. This integration can automate workflows, enable data sharing and facilitate communication between different services, enhancing efficiency and reducing manual processes. You can also tap into specialised services for tasks like machine learning, AI, analytics and database management, enhancing your application’s capabilities.

This feature-rich setup will be available worldwide to the organisation’s employees with low latency and high performance. Cloud services offer elastic scalability, which means you can easily adjust resources up or down to meet changing demands.

That setup will also allow companies to optimise costs by selecting services that are most cost-effective for specific tasks. You can also implement cost-saving measures such as serverless computing and on-demand resource allocation.

Finally, as business needs change, the connected cloud services can be adapted and extended to accommodate new requirements.

Breaking SaaS

So far, we’ve only talked about connecting SaaS, but when should we break SaaS connections – and how?

Breaking SaaS connections, or discontinuing integrations with specific SaaS services, can be necessary for various reasons, including changing business needs, cost considerations, security concerns, or the availability of better alternatives. Questions around data sovereignty or compliance can also force an organisation to reevaluate where intellectual property or personal data sets can be stored externally. When breaking SaaS connections, it’s essential to communicate clearly with all relevant stakeholders, plan for data migration or backups and follow the appropriate procedures outlined by the SaaS providers to avoid disruptions and data loss. Organisations should also consider the potential impact on their business processes and user workflows and plan accordingly to minimise any negative consequences.

Overall the rollback needs to be planned and tested carefully because it will most likely affect business-critical applications and processes.

In cloud computing, decisions about physical data sovereignty, the placement of guardrails and port of entry checks depend on a variety of factors.

Data sovereignty

Another question then… where – in physical data sovereignty terms – can clouds be connected and where should borders, guardrails and port of entry checks be in place?

Data sovereignty regulations vary by country and region and they dictate where data can be stored and processed. In some cases, data must remain within specific geographical borders. Compliance with these regulations often requires implementing borders, guardrails and checks to ensure data remains within the legal boundaries.

Also, the choice of cloud deployment model (public, private, hybrid, or multi-cloud) influences where data sovereignty measures are applied. Public clouds, for example, may have data centres in multiple regions and countries, making it critical to establish clear borders and checks to control data residency. Private and hybrid clouds may offer more control over data placement, but they still require attention to data sovereignty requirements.

Besides strong access control and identity management, organisations can use their network architecture to maintain control over data sovereignty. Organisations can establish network boundaries and routing rules to control data flow and keep it within specified borders.

Data itself should ideally be classified based on its sensitivity and regulatory requirements. Highly sensitive data may have stricter data sovereignty requirements. Implement guardrails and checks based on data classification to ensure appropriate controls are in place.
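
One way to express such a guardrail is a simple residency check that refuses to place data in a region its classification does not allow. The classifications and regions below are assumptions for illustration only.

```python
# Allowed regions per data classification; labels and regions here are illustrative assumptions.
RESIDENCY_POLICY = {
    "public":        {"eu-west-1", "us-east-1", "ap-southeast-1"},
    "internal":      {"eu-west-1", "us-east-1"},
    "personal_data": {"eu-west-1"},   # e.g. must stay inside the EU
}

def check_residency(classification, target_region):
    """Guardrail: block placement of data in a region its classification does not allow."""
    allowed = RESIDENCY_POLICY.get(classification, set())
    if target_region not in allowed:
        raise PermissionError(
            f"{classification} data may not be stored in {target_region}; "
            f"allowed regions: {sorted(allowed)}"
        )

# Example: this call would raise, because personal data is restricted to eu-west-1.
# check_residency("personal_data", "us-east-1")
```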

Encryption helps to protect data in transit and at rest. Encryption keys should be managed securely and access controls enforced. Port of entry checks should include verification of encryption mechanisms and keys.
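
As a small illustration of encryption at rest, the sketch below uses the Fernet recipe from the cryptography package. In practice the key would come from a key-management service rather than being generated next to the data, which is the assumption to keep in mind here.

```python
from cryptography.fernet import Fernet

# In production the key would be fetched from a key-management service (KMS/HSM),
# not generated and held alongside the data like this.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b"customer record that must be protected at rest"
ciphertext = cipher.encrypt(plaintext)   # encrypt before writing to storage
restored = cipher.decrypt(ciphertext)    # decrypt on an authorised read

assert restored == plaintext
```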

Organisations should implement data backup and disaster recovery strategies that consider data sovereignty requirements. Data backups should adhere to the same regulations as primary data.

Finally, the contract and the cloud Service Level Agreements (SLAs) give insights into how the provider manages questions of data sovereignty. It is absolutely vital to review these parts of the contract.

Let’s also ask, what roles do automation and the wider world of RPA play in the cloud connection SaaS landscape?

Here is where the magic and the fastest innovation happens. Both automation and RPA are revolutionising the interaction between the services and the users, making it a more comfortable experience. The user, whether a business user or a member of the IT team, receives better information and can make better decisions. Automation and RPA, in conjunction with machine learning and AI, can take over the important but repetitive and boring work and escalate events to the IT/security teams when things become important, complicated and exciting.

Below are some examples:

  • Automate standard tasks: Every day, IT teams see hundreds of predefined tasks fail. Automating this workflow can ensure that data and processes flow seamlessly between different services, enhancing productivity and reducing latency. If you are a backup administrator, for example, your role is to examine, reschedule and restart any failed jobs. All these processes can be automated. Based on a threshold and/or the priority of the machine, or the data stored on it, you can define how often the restarted process is allowed to fail before alerting the administrator to decide whether to intervene manually; a small sketch of this idea follows the list.
  • Gather information: There can be thousands of reasons why systems or other tasks fail and AI can automatically identify and report the causes and also proactively recommend the next steps. AI can save hours and hours of time by automating this research. As confidence grows, it can also carry out those next steps itself.
  • Prioritise workloads: Automation and RPA can identify the best time to schedule (or reschedule) backups and automate the process, based on different criteria. For example, if a copy of the backup of a workload is also written to tape at the end of each day, the AI knows that the fresh backup must be ready by a certain point. In this way, the AI helps to contain the risk of unexpected data loss.
  • Secure the crown jewels: Automation can quickly analyse large amounts of data and help IT and security teams to understand the content and therefore the value of the data. The AI can rank data and workloads according to their business value, giving weight to many downstream tasks – starting with event correlation, where events are prioritised based on the value of the data. Copies of the most important classified data can be automatically pushed into a virtual cyber vault, which is physically separated from the rest thanks to an airgap and holds an immutable copy of the data.
  • Proactive prevention: AI can also include status data of the machines and their condition. If the hardware wobbles because components have failed, AI can proactively redirect workloads to other systems or instruct the backup system to restore the affected machines to different hardware. This prevents data loss and failures.
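
To make the first of those examples a little more tangible, the sketch below retries a failed backup job against a priority-based threshold and only escalates to an administrator once the retry budget is used up. The job structure, priorities and callbacks are hypothetical.

```python
# Hypothetical retry budget per machine priority.
MAX_RETRIES_BY_PRIORITY = {"gold": 1, "silver": 3, "bronze": 5}

def handle_failed_job(job, restart_fn, alert_fn):
    """Restart a failed job until its priority-based retry budget is exhausted,
    then escalate to an administrator instead of retrying forever."""
    budget = MAX_RETRIES_BY_PRIORITY.get(job["priority"], 3)
    for attempt in range(1, budget + 1):
        if restart_fn(job):                      # restart succeeded
            return f"recovered after {attempt} attempt(s)"
    alert_fn(job)                                # budget exhausted: hand over to a human
    return "escalated to administrator"

# Example usage with stand-in callbacks:
if __name__ == "__main__":
    job = {"name": "nightly-vm-backup", "priority": "gold"}
    result = handle_failed_job(job, restart_fn=lambda j: False,
                               alert_fn=lambda j: print("ALERT:", j["name"]))
    print(result)
```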

Automation and RPA can dramatically reduce the massive toll on users, IT and security teams by doing many of the important but tedious tasks themselves.

They provide comprehensive reporting and clear, concise next steps, giving a wood-from-the-trees perspective to operational groups that are undersized for the difficult jobs at hand.
