VMware VSAN 6.5 supports containers and physical servers via iSCSI
VMware Virtual SAN now supports containers with persistent storage and allows physical server storage access to its shared pool of capacity via the iSCSI SAN protocol
VMware has announced upgrades to its Virtual SAN (VSAN) functionality in the vSphere 6.5 virtualisation environment, with enhancements that include support for containers, physical server access via the internet small computer system interface (iSCSI) protocol, and the ability to connect two nodes directly without the need for a storage switch.
VSAN is storage functionality in vSphere that pools storage for virtual machines across up to 64 nodes, providing shared capacity with data distributed between them.
Support for containers in VSAN is primarily aimed at test and development use cases, said VMware storage products vice-president Lee Caswell.
“Containers are aimed at agile development, and at the same time with VSAN,” said Caswell. “It is something IT generalists can get to grips with, and they are being strongly adopted in test and development environments.”
Containers can run on physical server operating systems, and can operate without many of the storage overheads associated with virtual machines.
For this reason, VMware developed Project Photon, a host for containers that runs a lighter Linux-based operating system on a vSphere hypervisor.
Containers, as originally conceived, did not use persistent storage, that is, storage that lasts longer than the ephemeral app with which it is associated. Customers are, however, increasingly using containers for longer-lasting use cases, so persistent storage is required.
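The distinction is easy to see with container tooling such as Docker: a named volume lives independently of any container that mounts it. A brief illustrative session (volume and image names are examples, not anything specific to VSAN):

```shell
# A named volume persists independently of the container using it
docker volume create appdata

# Write some data, then let the container exit and be removed
docker run --rm -v appdata:/data busybox sh -c 'echo hello > /data/f'

# The container is gone, but the data survives in the volume
docker run --rm -v appdata:/data busybox cat /data/f
```

With VSAN 6.5, that persistent volume can be backed by the cluster's shared pool of capacity rather than a single host's local disk.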
Meanwhile, VSAN 6.5 now allows physical server access via iSCSI. Previously, only virtual servers in a cluster could access storage on a VSAN via a VMware proprietary protocol.
This move reflects the reality that despite the prevalence of virtualisation in recent years, many apps are still not virtualised. It enables VMware customers to use VSAN storage for such workloads.
Despite a theoretical upper limit of 6,400 VMs across 64 vSphere/VSAN nodes, the new iSCSI capability is intended for smaller use cases. “If you want to run a lot of physical servers, a SAN is still a good option,” said Caswell.
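Because VSAN presents its storage over standard iSCSI, a physical server connects to it like any other iSCSI target. A sketch of the usual initiator workflow on a Linux host running open-iscsi, with a hypothetical target address and IQN:

```shell
# Discover targets exposed by the cluster's iSCSI service
# (192.168.1.10 is a hypothetical target address)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

# Log in to a discovered target (the IQN here is illustrative)
iscsiadm -m node -T iqn.1998-01.com.vmware:vsan-target \
    -p 192.168.1.10:3260 --login

# The LUN now appears as a local block device, e.g. /dev/sdb
lsblk
```

From the server's point of view the VSAN-backed LUN is then just another block device to partition, format and mount.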
Finally, VSAN 6.5 now also supports direct connectivity between two ESXi server nodes with VSAN at remote locations, without the need for a storage switch between them.
This, said Caswell, lowers the cost for businesses that want to deploy vSphere/VSAN as a datacentre-in-a-box at remote sites. A “witness” server, either onsite or elsewhere, ensures consistency between the two nodes if one should go down, and avoids “split brain” issues.
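The witness works on a simple majority principle: two data nodes plus a witness make three votes, so whichever side of a network partition can still reach a majority of votes keeps serving data, while the isolated side stops. A minimal sketch of that quorum logic (not VMware's implementation; names are illustrative):

```python
def has_quorum(reachable_votes, total_votes=3):
    """A partition keeps running only if it holds a strict majority
    of the cluster's votes (here: 2 data nodes + 1 witness)."""
    return reachable_votes > total_votes // 2

# Normal operation: a node sees its peer and the witness (3 of 3 votes)
print(has_quorum(3))  # True

# Inter-node link fails: the node that still reaches the witness
# holds 2 of 3 votes and continues serving data
print(has_quorum(2))  # True

# The isolated node sees only itself (1 vote) and must stop,
# which prevents both sides writing independently ("split brain")
print(has_quorum(1))  # False
```

Because the vote count is odd, at most one side of any partition can ever hold a majority, which is what rules out split brain.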