In-memory database technology gains ground, but challenges remain

In-memory database technology notably improves the performance of specific applications. But several challenges must be overcome before in-memory databases become the norm.

Every industry has its own set of milestones: events considered more important than others because of their ability to significantly alter the industry’s course. The Little Black Dress created by designer Coco Chanel in the 1920s is said to be one such revolutionary development, shaping the modern French fashion industry into what it is today. And even as you read this piece, something similar could well be happening in data analytics.

The potential game changer here is the market’s growing interest in database querying using in-memory database technology. Interest is being driven by the increasing capacity and falling cost of memory, coupled with the poor performance of enterprise applications struggling with the growing volumes of data that must be analyzed and managed. However, it will be some time before in-memory database technology becomes widespread, as its growth is not free of obstacles.

Slow pace of adoption

One should note that although databases built on in-memory technology boast improved performance, they are currently used only in selected, high-value applications and for specific reasons, because they are still physically constrained by infrastructure limitations.

“An in-memory database has made querying more affordable and viable. Adoption rates are increasing but because in-memory solutions are generally ‘fast read / slow write’, they aren’t being adopted as rapidly as one might think,” says John Brand, a senior analyst at Forrester Research.

There has been greater adoption for specific applications, but as a generalized database platform there is still a long way to go, because considerable effort is involved in loading and managing the in-memory tables within the database itself. Organizations sometimes believe this is seamless and view the in-memory database as mere add-on functionality. In some cases it is, but the more generalized the usage, the more likely it is to require additional effort in implementation, operations and maintenance.

For large-scale implementations where a huge investment has already been made in the application, an equivalent investment in in-memory infrastructure capabilities can give IT a “quick win” and help address business criticisms about poor application (and therefore perceived IT) performance. It should be noted, though, that the quick win is also very quickly forgotten by the business. Moreover, simply upgrading aging hardware can have similar benefits in many cases, and combining new hardware, new software and a move to in-memory database technology all at once obscures the real benefit of any one of them. Business users see a change in performance, but it soon becomes established as “the norm”.

Though in-memory database technology is gaining ground, there is still some way to go, according to Suneel Aradhye, CIO of Essar Steel. He explains, “We at all points need to verify manufacturing involvement and figure out a fit case for HANA [SAP’s High-Performance Analytic Appliance]. SAP is also promoting HANA from the perspective of migrating resource-intensive functions.” Having both a product roadmap and a business roadmap is essential to exploit the power of in-memory database technologies. Essar is currently in talks with SAP to assess and implement its HANA architecture.

Market trends indicate that the in-memory database will increasingly be packaged as an option for major on-premise applications such as CRM, ERP, supply chain and even ECM, where performance related to these specific applications becomes problematic. It’s already been adopted heavily in financial services for real-time market data and complex analysis needs. “With the volume of transactions, a bank will turn to in-memory for analytical purposes. The data is also growing. High performance computers are always sought out irrespective of the organization. But, until a maturity level is reached, they will have to work with database appliances that may provide in-memory,” explains Harish Shetty, executive vice president for information technology at HDFC Bank.

Gains depend on multiple factors

The in-memory database definitely offers a substantial performance increase over disk-based systems. However, in-memory solutions are still ultimately constrained by memory capacities and, when loading or persisting data, by disk transfer speeds. As a result, not all applications can be successfully run in-memory, and not all will benefit equally.

Application-specific offerings from vendors will obviously benefit only those applications that are already “pre-built and pre-tuned” for that environment. Organizations that use generalized storage/database devices with in-memory capabilities will find the benefits of the technology much harder to achieve, and thus to justify. Also, because performance is often tied directly to applications and infrastructure architectures, some organizations will find little to no benefit in moving to an in-memory database. In other cases, the application may already have been written to manage data extremely efficiently, so disk-based storage is not a bottleneck at all. It is therefore important to understand the application architecture before deploying in-memory database solutions, to determine whether the benefits are likely, and worth pursuing.

Vendor strategies

“SAP, as just one example, had such a poor end-user perception about its application performance that, as a company, it had to push in-memory database solutions, or risk significant backlash and a potential market churn,” says Brand from Forrester. “When organizations had already spent tens of millions of dollars on software and hardware implementations, poor performance at the end-user level was becoming increasingly difficult to justify. In-memory technologies have helped SAP to quickly overcome these perceptions of poor performance at a much lower cost than having to re-factor the design of the entire application.”

Industry participants believe that while the in-memory database is a viable option, larger players will soon seek to provide it as a SaaS option (in the form of in-memory services).

Smaller players, meanwhile, are using a different pitch for their in-memory offerings. Andy Honess, vice president for global enterprise accounts at QlikTech, says, “Large companies take large checks to provide in-memory databases. They sell the whole package. We do an add-on and make it available to smaller organizations.”

Challenges

Still, some challenges persist with the in-memory database. For the most part, an in-memory database acts as a disk cache, loading the database structure and data into memory rather than retrieving them from disk; the performance gains therefore come from faster reading of data. Those gains, however, are still limited by the architecture of the application driving the database, the structure and implementation of the database itself, the hardware the database runs on, and its connectivity to external devices.
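To make that caching effect concrete, consider a minimal sketch in Python. This is illustrative only and not drawn from any vendor’s product: SQLite stands in for an in-memory database engine, and the file name (demo.db), table name (sales) and row count are assumptions chosen for demonstration. The sketch builds a small file-backed database, copies it wholesale into memory, and times the same aggregate query against both.

    # Illustrative sketch only: SQLite stands in for an in-memory database engine.
    import sqlite3
    import time

    # Build a small file-backed database (the disk-based system).
    disk = sqlite3.connect("demo.db")
    disk.execute("DROP TABLE IF EXISTS sales")
    disk.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, amount REAL)")
    disk.executemany("INSERT INTO sales (amount) VALUES (?)",
                     [(i * 0.5,) for i in range(200_000)])
    disk.commit()

    # Load the whole database into memory up front, as described above:
    # structure and data are held in RAM instead of being read from disk.
    mem = sqlite3.connect(":memory:")
    disk.backup(mem)  # copies every page into the in-memory connection

    def timed_sum(conn, label):
        start = time.perf_counter()
        total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
        print(f"{label}: sum={total:,.1f} in {time.perf_counter() - start:.4f}s")

    timed_sum(disk, "disk-backed read")
    timed_sum(mem, "in-memory read")  # typically faster; writes would not gain as much

Reads from the in-memory copy avoid disk I/O entirely, which is where the gains described here come from; writes still have to be persisted to durable storage eventually, which is one reason for the “fast read / slow write” profile Brand describes.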

There is also the flash memory “in-memory” solution, which does not rely on traditional disk-based media at all. Whichever approach is taken, substantial performance improvements can clearly be gained by moving high-frequency reads into memory. “However, large volumes of data in lower frequency reads aren’t necessarily that much more efficient with in-memory systems. Similarly, highly complex multi-user access can have a minimal impact as well,” warns Brand.
