Alan Browning outlines what hyper-converged infrastructure is, and how companies that have started the journey can proceed to the next step.

A subset of the complete software-defined data centre, hyper-converged infrastructure (HCI) is becoming more mainstream; its major advantage is that it targets the low-hanging fruit within data centres.

It does this by optimising operations and availability: turnkey appliances can be up and running in a short span of time, without requiring the advanced technical skills that most customers don’t have in-house.

However, as with any great technology, be sure to mind the speed bumps on the road to a software-defined data centre (SDDC).

The journey to a software-defined data centre starts now.

 

An overview of the current HCI landscape

In recent years, speaking to enterprise customers about implementing HCI has been likened to pushing water uphill. Thankfully, that mindset is changing as understanding increases and perceptions change.

I have often said that the biggest resistance to change is people. However, when a technology fundamentally changes business processes, simplifies infrastructures and gives IT the ability to become a profit centre, adoption of the technology occurs despite the naysayers. As a point of reference, think back to the resistance when we as IT professionals moved from physical servers to virtual servers – presently, virtual machines are the norm for 90 per cent of x86 deployments globally.

However, I believe three misconceptions persist in how HCI is sold:

  1. HCI is being sold as a rip-and-replace: Instead, it should be sold as a transitional technology.
  2. HCI is being sold as the answer to every IT challenge: It is important to remember that HCI is not a single product, but a reference architecture. The beauty of HCI is that it has an offering for every wallet size and suits most enterprise applications. Understand the customer requirements before leading with a particular product.
  3. HCI is being sold as a SAN replacement product: This is a misnomer, and a brief history lesson will serve us well. No technology is perfect, and despite the best intentions a new technology can sometimes cause unexpected problems in the data centre. Think back to the phenomenon of virtual machine (VM) sprawl. In virtualisation’s heyday, it was not uncommon for companies to go from 50 physical servers to 75–100 VMs purely because of how easy they were to deploy. Without proper governance in place, organisations took longer to realise their return on investment, and total cost of ownership rose, owing to increased licence costs and the complexity of managing a larger VM estate.

 

Why selling HCI as a SAN replacement increases costs

One of the unexpected problems this sales approach produces is similar to the one VM sprawl introduced.

The challenge is that most organisations have a plethora of storage in their environments, so positioning HCI as a SAN replacement means proposing so many additional nodes to absorb that storage that the customer ends up with compute and memory that sit unused.

This drives up licensing costs, data centre floor space requirements and, of course, the overall price. It is the root of the common complaint that ‘HCI is a great technology but crazy-expensive’.
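To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it – usable terabytes per node, cores per node, the size of the existing storage estate, the compute demand – is an illustrative assumption, not a vendor specification or a real customer profile.

    # Back-of-envelope sizing: buying HCI nodes purely to absorb an
    # existing SAN's capacity. All figures are illustrative assumptions.
    USABLE_TB_PER_NODE = 20   # assumed usable storage per HCI node
    CORES_PER_NODE = 32       # assumed compute shipped with each node
    san_capacity_tb = 600     # assumed capacity the SAN provides today
    workload_cores = 128      # assumed compute the workloads actually need

    # Nodes required if HCI must replace the SAN outright (ceiling division):
    nodes_for_storage = -(-san_capacity_tb // USABLE_TB_PER_NODE)   # 30 nodes

    # Nodes the compute demand alone would justify:
    nodes_for_compute = -(-workload_cores // CORES_PER_NODE)        # 4 nodes

    stranded_cores = nodes_for_storage * CORES_PER_NODE - workload_cores
    print(f"Nodes bought to hold storage:  {nodes_for_storage}")
    print(f"Nodes justified by compute:    {nodes_for_compute}")
    print(f"Idle cores you still license:  {stranded_cores}")

On these assumed figures, the storage requirement forces 30 nodes where the workloads justify only four, leaving 832 licensed cores idle – the ‘crazy-expensive’ effect in miniature.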

 

So what’s the solution?

While HCI undoubtedly addresses most of the complexities that organisations wrestle with on a daily basis, it’s not a perfect solution for large organisations’ storage challenges.

The answer is quite simple: the journey to a true SDDC consists of moving all the traditional building blocks – namely compute, storage and networking – into a software-defined state. HCI takes care of the compute requirements, and software-defined storage (SDS) should be positioned as a complementary technology whenever an HCI solution is proposed. This drives down the number of nodes that must be positioned to address the storage requirements of the SDDC, as the sketch below illustrates.
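Continuing the hypothetical figures from the earlier sketch, the effect of pairing HCI with SDS can be shown in a few lines. Again, every number is an assumption for illustration only.

    # Reusing the assumed figures: size HCI for compute only, and let an
    # SDS layer (which can also virtualise the existing arrays) carry the
    # bulk capacity instead of extra HCI nodes.
    USABLE_TB_PER_NODE = 20   # assumed usable storage per HCI node
    CORES_PER_NODE = 32       # assumed compute per HCI node
    capacity_tb = 600         # assumed storage demand
    workload_cores = 128      # assumed compute demand

    storage_driven = -(-capacity_tb // USABLE_TB_PER_NODE)   # HCI as SAN replacement: 30
    compute_driven = -(-workload_cores // CORES_PER_NODE)    # HCI + SDS: 4

    print(f"HCI positioned as a SAN replacement: {storage_driven} nodes")
    print(f"HCI for compute, SDS for capacity:   {compute_driven} nodes")

Sized this way, the HCI estate shrinks from 30 nodes to four, with the remaining capacity served by the SDS layer rather than by stranded compute.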

SDS is also a reference architecture with various solutions at different price points, be it file and block storage, object storage, or offerings that allow customers to virtualise all existing storage – regardless of vendor – into a single consolidated pool.

 

Closing thoughts

Don’t allow SDS to become the new ‘pushing water uphill’ technology – embrace it. Once the compute and storage are software-defined, wrapping a layer of software-defined networking around the solution should be a relatively simple task.

The old adage of picking the right tool for the job is more relevant than ever. And the most important process before suggesting any ‘tool’ to our clients is to understand what the ‘job’ is. Get it right and 2018 could be the year that the SDDC becomes a reality.


Alan Browning is solutions leader: HCI META (Middle East, Turkey and Africa) at Lenovo.

 
