Enterprise storage performance is largely a black box, with most enterprise storage chiefs in the dark about the input/output (I/O) profiles of the key applications that drive their storage systems.
This is according to a survey carried out for storage performance testing provider LoadDynamix, which questioned storage professionals in predominantly large enterprises.
About two-thirds (65%) of the 115 individuals surveyed said they can only assess I/O use on an ad hoc basis, rely on their storage supplier to tell them, or simply have no access to this information.
Remarkably, for such large enterprises with costly and sophisticated storage hardware deployed, only 35% of respondents said they measure and fully understand the I/O profiles of key applications. That is despite the obvious implications of poor storage performance.
For most of those surveyed (90%), a dip in storage infrastructure performance means decreased user satisfaction, while for more than half (54%) it means lost revenue. For 51% it means violation of service-level agreements (SLAs), and for 49% it results in downtime.
The top storage project for the coming year is implementing a new backup/disaster recovery system, which was indicated by 54% of those questioned.
Improving availability (51%) was also highlighted as a priority, as well as evaluating new storage technologies such as object, cloud and software-defined storage (49%), improving storage performance with flash (45%), and reducing storage costs (37%).
Most of those questioned run a Fibre Channel storage area network (SAN) environment, while 12% run network-attached storage (NAS) systems and 10% run iSCSI SAN. Only 4% run object storage.
When asked about key projects for the forthcoming year, the addition of all-flash arrays came top with 54% of respondents highlighting it as a priority, while public cloud for non-mission critical data was second with 28%.
Converged or hyper-converged storage was the next highest priority at 26%, while 23% said they planned a move to software-defined storage on standard x86 architectures.
More than a quarter said they would move non-mission-critical applications to the cloud, while 7% planned to do so for mission-critical environments.
Just under one-third (29%) manage between 1PB and 10PB, and 47% manage between 2PB and 10PB. Only 2% of those questioned manage less than 500TB. The supplier most widely deployed was EMC (69%), followed by NetApp (52%), IBM (29%), HP (23%), Hitachi Data Systems (23%) and Dell (10%).