
SAP Hana storage 101

If you're starting down the road towards SAP Hana in-memory analytics, you'll need to understand its storage requirements. Here are some basics to get you started.

As the cost of memory has fallen, interest has grown in applications that don’t regularly swap data out to disk – the traditional method of conserving system RAM.

Instead, popular big data applications such as SAP Hana work entirely or primarily in-memory, using servers with many gigabytes or even multiple terabytes of system RAM.

Working in-memory is an extremely powerful approach for big data work, especially for applications such as real-time analytics, an area where SAP Hana has gained growing acceptance.

However, while Hana runs in memory, it still has to write to persistent storage – which means disk, flash and eventually, yes, maybe tape too – to protect its work. Persistent storage is what provides the essential database transaction guarantees known as Acid – atomicity, consistency, isolation and durability. 

SAP Hana can be installed as a turnkey appliance or in a form that SAP calls TDI (tailored datacentre integration), with servers running on enterprise storage. For the latter case, SAP offers a hardware configuration check tool to make sure your storage meets Hana’s performance and functional requirements.

Single host or cluster?

A scale-up installation on a single host is simplest from the perspective of storage allocation and directory layout. Alternatively, the software can be installed as a distributed scale-out cluster, either on physical servers or VMs.

A distributed cluster is potentially more powerful and useful (although a single host can also scale up), but makes storage management rather more complex. For either installation type, the storage requirement is further complicated by the need to add data protection.

As standard, SAP Hana uses storage for several purposes. The main ones are as follows, with a sketch of a typical volume layout after the list:

  • The operating system boot image – Hana runs on Linux.
  • Hana installation – its run-time binaries, scripts, configuration files, etc. This is also where trace files and profiles are normally stored. In distributed systems, every host has its own copy of these, plus scale-out configuration and trace files.
  • Persistence data – each Hana service or process makes its in-memory data persistent by writing changed data to storage as savepoint blocks. By default, this happens every five minutes.
  • Re-do log – each Hana service also records each transaction to its own log file to ensure that the database can be recovered without data loss.
  • Backup – backups are written on a regular schedule.
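
As a rough guide, those purposes map on to a fairly conventional Linux filesystem layout, sketched below in Python for illustration. The mount points follow common SAP conventions, but the exact paths, the system ID (shown here as the placeholder HDB) and the backup location all vary between installations, so treat this as a sketch rather than a specification.

```python
# A sketch of a conventional single-host Hana volume layout.
# Paths follow common SAP conventions; "HDB" is a placeholder system ID (SID)
# and the backup path is just one common choice -- check your own
# installation rather than treating these as fixed.
HANA_VOLUMES = {
    "/":              "operating system boot image (Linux)",
    "/usr/sap":       "Hana installation: binaries, scripts, configuration, traces",
    "/hana/shared":   "installation files shared by all hosts in a scale-out cluster",
    "/hana/data/HDB": "persistence data: savepoint blocks, written every five minutes",
    "/hana/log/HDB":  "re-do logs, one per Hana service",
    "/hana/backup":   "backup target (ideally a separate device from the persistence)",
}

for mount, purpose in HANA_VOLUMES.items():
    print(f"{mount:<16} {purpose}")
```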

Shared storage or server-side?

The decision of which type of storage to use for the persistence is complicated by two factors.

First, the savepoint blocks and re-do log files can use significantly different block sizes – up to 16MB (or even 64MB for super blocks) for the former, and up to 1MB for the latter.

Second, both are latency-sensitive and have different access patterns, with data access being mainly random while log access is sequential. Both are write-heavy, except during restarts, backups, reloads and so on.

Distributed clusters

All that means you need fast and flexible storage, so server-side flash (PCIe SSD) is often advised. Be warned, though, that server-side storage in a distributed cluster can affect your ability to do automatic load balancing and failover, especially in a virtualised environment – unless you take other steps such as also virtualising and pooling the server-side storage. 

Alternatively, you can use a shared array or subsystem: because Hana is logically a shared-nothing design, with each Hana service managing its own persistence data, it does not matter whether the underlying storage is physically shared.

Scale-out clusters are both shared-nothing and see-everything: each node has exclusive access to its own data and log persistence volumes, but all the other nodes still need to be able to see those volumes. This is because host auto-failover relies on moving the failed node’s persistence volumes to a standby node.

Sizing requirements

Memory sizing is a complicated matter, based on the space required for row and column data, objects dynamically created at run-time (such as temporary table duplication during delta merge operations), software code, caching and so on.

Persistence (savepoint blocks and re-do logs) sizing for each node can be calculated by analysing its tables using SQL or simply by taking the node’s RAM capacity, and then in each case adding 20% more for headroom.
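
As a minimal sketch of the RAM-based shortcut described above (the SQL-based table analysis will give tighter numbers), the function below applies the 20% headroom rule to a node's RAM capacity. The function name and the example figure are illustrative, not SAP's.

```python
def persistence_sizes_gb(node_ram_gb: float, headroom: float = 0.20) -> tuple:
    """Rough per-node sizing for the data and log volumes.

    Follows the shortcut above: start from the node's RAM capacity
    and add 20% headroom to each volume.
    """
    data_gb = node_ram_gb * (1 + headroom)
    log_gb = node_ram_gb * (1 + headroom)
    return data_gb, log_gb

data, log = persistence_sizes_gb(512)  # e.g. a node with 512GB of RAM
print(f"data volume: {data:.0f}GB, log volume: {log:.0f}GB")  # 614GB each
```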

You then need to add at least half as much again for backups, plus space for the other storage requirements mentioned above. It can be useful to direct all the backups from the nodes within a distributed system to the same storage device, but beware of routing a Hana node’s backups to the same array that its persistence data lives on.

Overall, if you total the memory capacity of all the hosts in a Hana cluster, then as a rule of thumb its storage subsystem should provide at least 2.5 to three times that amount of persistent capacity. If system or storage replication is added for disaster recovery, that doubles the storage needs – essentially, the same amount of storage must be provided on the secondary site. 
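
Pulling the figures in the last few paragraphs together, a back-of-the-envelope cluster calculation might look like the sketch below. The 2.5x multiplier and the doubling for replication come straight from the rule of thumb above; everything else is illustrative.

```python
def cluster_storage_tb(total_ram_tb: float,
                       multiplier: float = 2.5,
                       dr_replication: bool = False) -> float:
    """Rule-of-thumb persistent capacity for a Hana cluster.

    multiplier: 2.5 to 3 times total host RAM, per the guideline above.
    dr_replication: doubles the figure to cover the secondary site.
    """
    capacity_tb = total_ram_tb * multiplier
    if dr_replication:
        capacity_tb *= 2  # the same amount again at the secondary site
    return capacity_tb

# Example: four hosts with 1TB of RAM each, replicated for DR
print(f"{cluster_storage_tb(4.0, dr_replication=True):.0f}TB")  # 20TB
```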

Latency and bandwidth

SAP Hana’s log I/O in particular is latency-sensitive, which limits the use of synchronous replication to distances and media that permit low-latency connections. Hana’s application-based system replication is a more expensive alternative; a cheaper one is asynchronous storage replication – if you can live without instant recovery.

SAP also expects isolation of the Hana workload from other storage workloads. A TDI installation needs 400MBps of throughput per Hana node which, with protocol overheads, works out at between 4Gbps and 5Gbps on the wire, both over the SAN backbone and at the array. That equates to approximately one 16Gbps Fibre Channel inter-switch link and array port per three nodes. While it is unlikely that all nodes will hit this peak at once, it is a requirement for Hana certification.
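
To make that arithmetic concrete: 400MBps is 3.2Gbps, and adding an assumed 25% of protocol and framing overhead lands at roughly 4Gbps per node, three of which fit on one 16Gbps link. The sketch below simply restates that calculation; the overhead figure is an assumption chosen to match the 4-5Gbps range quoted above.

```python
import math

def fc_links_needed(nodes: int,
                    per_node_mb_s: float = 400.0,  # TDI throughput KPI per node
                    overhead: float = 0.25,        # assumed protocol overhead
                    link_gbps: float = 16.0) -> int:
    """Estimate how many 16Gbps FC inter-switch links a cluster needs."""
    per_node_gbps = per_node_mb_s * 8 / 1000 * (1 + overhead)  # ~4Gbps
    return math.ceil(nodes * per_node_gbps / link_gbps)

print(fc_links_needed(3))   # 1 -- about one 16Gbps ISL per three nodes
print(fc_links_needed(10))  # 3
```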

Each node also requires two Fibre Channel ports, preferably on separate HBAs to enable multipath load-balancing and redundancy/failover. And in a scale-out installation you will need to set up multiple zones, each in turn covering multiple networks.

The zones needed are client (for the application servers and users), internal (inter-node traffic and replication), storage (which also covers backup) and admin (including the boot and vMotion networks). For a TDI installation you will need to configure all four, while an SAP Hana appliance will come with internal and storage zones already configured.
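
For quick reference, the zone-to-network mapping reads as below; this simply restates the groupings in the text.

```python
# The four network zones in a scale-out TDI installation,
# restating the groupings described above.
HANA_NETWORK_ZONES = {
    "client":   ["application servers", "end users"],
    "internal": ["inter-node traffic", "system replication"],
    "storage":  ["persistence I/O", "backup"],
    "admin":    ["boot network", "vMotion network"],
}
```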

And finally

One caveat with all the above is that SAP continually develops and enhances Hana. That means its capacity requirements for disk and memory can change from one version to the next, as well as varying from one implementation to another depending on the data and the workload. A project of any significant size will probably need professional sizing assistance, but the information above should arm you for commissioning the project and dealing with the professionals.
