
Why Hyper-converged is the New Wave

By Michael Morey, AVP Infrastructure, National Life Group

It amazes me that the hyper-converged market did not get the attention it deserved until the last several years. Once seen as niche solution providers, the hyper-converged players are being acquired, showing up in other solution offerings like backup and recovery, and simply stealing market share from the more traditional compute and storage vendors.

"home the concept of creating a “Hyper Team”"

There are several options when it comes to delivering compute and storage in a datacenter environment. You have the traditional pizza box 1U/2U servers, server blades and chassis, storage arrays, and storage fabric switches. You can certainly consolidate storage and networking switches in many cases, but the added complexity of a converged network does not aid in troubleshooting efforts. Numerous vendors pre-package all of the above and deliver what is commonly referred to as converged infrastructure. There has always been value in this approach, but for me the price has not been attractive enough, as you inevitably need technical skills to operate and troubleshoot these "engineered" environments, not just a phone number to call.

Finally, we get to my point of view on hyper-converged solutions, which I view as commodity, shared storage that can truly be operated as plug-and-play capacity for both compute and storage in the datacenter.

Manufacturers have approached delivering this technology from two different camps.

1. Camp 1 – legacy large-frame vendors like IBM, HP, and EMC that deliver traditional data center resources and need to transition to lower-cost hyper-converged.

2. Camp 2 – vendors like Nutanix and Simplivity whose core business is hyper-converged, that started in this space and have been fighting to gain market share from camp 1.

I would have thought the legacy vendors would be able to invest in heavy R&D and deliver hyper-converged at an aggressive price point, but this has not materialized as of yet. I assume they have to manage the transition away from their tried-and-true large-frame solutions, both from a manufacturing and a profit-and-loss perspective. When you are investing in what will surely become commodity products, taking money out of one pocket and putting smaller amounts in the other does not seem like a good incentive. Sometimes innovation can elude even the largest vendors, so acquisition is the next best thing, as is the case with HPE's Simplivity purchase. HPE, from my perspective, had everything they needed to engineer a solution but just could not get it done. Remember, this is about commodity servers, shared storage, and some secret software sauce that is not all that secret anymore.

I have babbled long enough about vendors, so let's talk about why now is the time for hyper-converged.

First and foremost, compute resources like CPU and RAM have aligned with storage capacities of well above 10 TB at the appliance level. Solid-state disks add performance but, more importantly, capacity to the mix, so you are not purchasing appliances just to fulfill your storage requirements. Getting compute and storage at optimal levels means the best price point and not buying appliances to fulfill storage or compute alone.
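As a rough illustration of that sizing balance, the sketch below checks how many appliances each resource dimension would demand for a workload. The per-appliance specs and workload numbers are hypothetical, not vendor figures; when the ratios are aligned, no single dimension forces you to overbuy.

```python
import math

# Hypothetical per-appliance specs and workload requirements (illustrative only).
APPLIANCE = {"cores": 48, "ram_gb": 768, "usable_tb": 12}
WORKLOAD = {"cores": 400, "ram_gb": 6000, "usable_tb": 90}

def appliances_needed(appliance, workload):
    """Return the node count each resource dimension would require on its own."""
    return {
        dim: math.ceil(workload[key] / appliance[key])
        for dim, key in [("compute", "cores"),
                         ("memory", "ram_gb"),
                         ("storage", "usable_tb")]
    }

needs = appliances_needed(APPLIANCE, WORKLOAD)
print(needs)                         # {'compute': 9, 'memory': 8, 'storage': 8}
print("buy:", max(needs.values()))   # a balanced spec keeps these numbers close
```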

Second is performance. The software-defined storage "secret sauce," coupled with SSDs and local server execution against that storage, has outperformed some of our highest-end legacy technologies. Couple this with offloaded NVIDIA processors within the same appliance and you have a screaming Virtual Desktop Infrastructure environment, though with some caveats, like limited auto failover for virtual machines in our case. It is no myth that this new approach can achieve a 10-15 percent increase over legacy high-end large-frame solutions.

Third is resiliency. To achieve very high nines in storage availability, you either need a single device that delivers it, which is typically expensive, or several devices that you distribute storage across, avoiding single-device impacts on availability. I always discourage having just one of something, "somewhat NASA like," as it leaves me feeling uncomfortable and with no options. The striping of data across numerous appliances is a tried-and-true data protection method that all the vendors provide.
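A back-of-the-envelope model makes the point. The figures below are assumptions, not vendor SLAs, and the math assumes independent failures, which real clusters only approximate; still, it shows why spreading replicas across commodity appliances can rival a single expensive array.

```python
# Compare one high-end array to data replicated across commodity appliances.
def nines(availability):
    """Express availability as a percentage string."""
    return f"{availability * 100:.4f}%"

single_array = 0.999        # assumed availability of one high-end array
node = 0.99                 # assumed availability of one commodity appliance
replication_factor = 2      # each block written to two appliances

# Data stays reachable unless every appliance holding a copy is down at once.
replicated = 1 - (1 - node) ** replication_factor

print("single array:", nines(single_array))   # 99.9000%
print("RF2 cluster :", nines(replicated))     # 99.9900%
```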

Finally, there are some operational considerations this technology brings to staffing models. In our case we are nearly 100 percent outsourced for infrastructure, and those pesky vendors still divide and conquer IT services in silos: hypervisor teams, network teams, storage teams, backup teams, disaster recovery teams, Windows server teams, and so on. What do you do in this case? You force them to adapt and drop your service costs, because this old approach is too expensive. Hyper-converged drives home the concept of creating a "Hyper Team." Imagine the same technical staff being able to cover all the previous technology verticals. Not only is it achievable, it is somewhat mandatory.

Have no illusions: the appliances will still need care and feeding. Those simple patches and capacity upgrades can still challenge you, so good design practice should have you maintaining several clusters of hyper-converged appliances, allowing you to test upgrades on smaller, less critical clusters before tackling your production environments.
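A minimal sketch of that rollout order follows, assuming a made-up cluster inventory and a placeholder upgrade call; your vendor's actual upgrade tooling will differ.

```python
# Patch the least critical cluster first and halt the rollout on any failure.
clusters = [
    {"name": "vdi-test", "criticality": 1},
    {"name": "dev",      "criticality": 2},
    {"name": "prod-a",   "criticality": 3},
    {"name": "prod-b",   "criticality": 3},
]

def upgrade(cluster):
    """Placeholder for the vendor's upgrade workflow; return True on success."""
    print(f"upgrading {cluster['name']} ...")
    return True

for cluster in sorted(clusters, key=lambda c: c["criticality"]):
    if not upgrade(cluster):
        print(f"halting rollout at {cluster['name']}")
        break
```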

In closing, there is no reason not to jump into hyper-converged at this point in time, with only a few things to watch out for: being ready for all those 10Gb network ports, several IPv6 requirements, and encryption-at-rest requirements that may or may not be available with your selected products. Be brave and start your journey by doing as many POCs as time allows, and then dive in. Whether you are using it for virtual desktops, development environments, or production workloads, hyper-converged is no joke at this point.