Move over convergence; IT is all about hyper-convergence nowadays. But what is hyper-convergence and why should you care? Kathy Gibson looked for some answers.
The IT industry is one of massive change, with new technologies and products coming to market all the time.
This has led to a tremendous amount of complexity within the data centre – and the products developed to help administrators to manage this complexity often add to it.
Christo Briedenhann, regional director at Simplivity, explains that the move to hyperconvergence ushers in a new and disruptive way of doing IT.
Boiling it down to its most basic, he explains that hyperconvergence brings all storage, virtualisation and processing power back into one box, helping to simplify the virtualised infrastructure.
“Our IT budgets are going round in circles, with servers, storage and maintenance; and we keep adding new resources to the existing resources,” he points out.
“IT needs to support the business – especially when budgets are tight – and companies are under pressure to reach out to new markets. In South Africa there are additional pressures on IT such as uncertain power supply.
“The reality is that with all the challenges of having to support the business in new ventures – in a tough environment – 80% of IT budgets are flat or down; and only 20% of budgets are available for the innovation and new projects that can help to drive the business,” Briedenhann says.
“So after spending so much money, with data centres filled with amazing technology, users are turning to the cloud to get what they need. They are using things like Dropbox, Amazon Web Services, Google Docs and OneDrive – and often these are not approved by the IT department.”
The problem, he explains, is that technology simply can’t keep up with data growth. “Data is literally exploding – by 2020 there will be 44 zettabytes of data being used. Even if storage capacity grows, we can’t read and write that fast anymore.”
Virtualisation burst on the scene some years ago to help companies maximise their data centre investments, but Briedenhann points out that, in many instances, it hasn’t really made the CIO’s job any easier. “In fact, virtualisation makes IT more difficult to manage.”
The industry’s classic answer is to develop new technology, and bring new products to market. Currently, there’s a lot of focus on deduplication, compressing and optimising data.
“We believe that data storage is creating a big problem,” Briedenhann says. “And to overcome the problem we fill up our data centres with more products and technologies – all of which need skills, rack space and cooling. And this leads to massive complexity.”
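Deduplication, one of the techniques Briedenhann refers to, works by storing each unique block of data only once and keeping lightweight references to it. A minimal sketch in Python illustrates the idea – the fixed 4KB chunk size and SHA-256 hashing here are illustrative assumptions, not any vendor’s actual implementation:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed block size (real systems vary)

def dedup_store(data: bytes):
    """Split data into fixed-size chunks; keep one copy of each unique chunk."""
    store = {}    # chunk hash -> chunk bytes, each unique chunk stored once
    recipe = []   # ordered list of hashes needed to rebuild the original
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # duplicates cost only a reference
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe):
    """Reconstruct the original data from the chunk store and recipe."""
    return b"".join(store[h] for h in recipe)

# Highly repetitive data dedupes dramatically:
data = b"A" * CHUNK_SIZE * 100       # 100 identical blocks
store, recipe = dedup_store(data)
print(len(store), len(recipe))       # 1 unique chunk, 100 references
assert rehydrate(store, recipe) == data
```

The same principle underpins the claim later in the article that hyperconverged backup avoids “data rehydration and re-deduplication”: if data already lives in deduplicated form, copies and backups can move references rather than raw blocks.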
“Hyperconverged Infrastructure for Dummies” defines hyperconvergence at its highest level as a way to enable cloud-like economics and scale without compromising the data centre’s performance, reliability and availability.
Hyperconverged infrastructure, the book states, provides significant benefits:
- Data efficiency – reducing storage, bandwidth and IOPS requirements;
- Elasticity – it’s easier to scale out/in resources as required by business demands;
- VM-centricity – the virtual machine (VM) or workload is the cornerstone of enterprise IT;
- Data protection – it’s key that data can be restored in the event of loss or corruption and this is made easier with hyperconvergence;
- VM mobility – greater application/workload mobility is enabled;
- High availability – better availability than in legacy systems is enabled; and
- Cost efficiency – a sustainable step-based model eliminates waste.
So why does hyperconvergence matter? “It lets businesses get to market faster,” Briedenhann explains. “Data centres can be deployed or moved quickly, while backup and other functions can be vastly sped up. Performance will increase; and the cost of management will be reduced while management itself becomes easier.
“Using hyperconvergence, companies can reduce costs; and they can increase operational efficiency, spending time on strategy and innovation instead of just running technology.”
Hyperconvergence is what you get when you’ve successfully implemented a software-defined data centre. Because it is software-based, it gives companies flexibility and agility in their IT resources.
It gives organisations a cloud-like experience, with faster time to value and lower total cost of ownership, but lets the IT department keep control of performance, high availability and reliability.
“Hyperconverged Infrastructure for Dummies” outlines 10 things that hyperconvergence can do for companies:
- Software focus – it’s flexible because new features can be added without ripping and replacing infrastructure;
- Use of commodity x86 hardware – this lowers the cost and means that IT departments can allow for failover;
- Centralised systems and management – compute, storage, backup to disk, cloud gateway functionality and more are combined in a single shared resource pool with hypervisor technology, so they can be managed across individual nodes as a single federated system;
- Enhanced agility – all resources in all physical data centres reside under a single administrative umbrella, so it’s easy to migrate workloads;
- Scalability and efficiency – a smaller step size means more efficient usage of resources;
- Low cost – the cost of entry of hyperconverged infrastructure is much less than that of legacy infrastructure;
- Easy automation – with combined resources and centralised management, administration functionality includes scheduling opportunities and scripting options;
- Focus on VMs – policy revolves around VMs, along with management options like data protection;
- Shared resources – many kinds of applications can be deployed in a single shared resource pool, allowing for efficient use of resources for improved capacity and performance;
- Data protection – hyperconvergence helps IT organisations to do comprehensive back-up and recovery, with affordable disaster recovery; efficient protection without data rehydration and re-deduplication; and a single centralised console that allows IT to respond quickly.
The path to hyperconvergence, Briedenhann says, lies in proving its value to solve specific problems, then transforming the data centre to improve operational efficiency; and finally revolutionising IT with TCO savings.
Lean IT drives operational efficiency
In many ways, IT has failed to use technology to provide maximum efficiency and return on investment (ROI).
Manufacturing plants, says Hamut Pascha, director: global financial services at Simplivity EMEA, are better models of automation and efficiency. “Plants tend to be highly flexible, with higher agility and lower costs. This is because they have introduced Lean management and technical innovations.
“They are highly automated, less labour intensive, more flexible and agile. They respond to business requirements, shifting resources for higher quality.”
In the data centre, however, the addition of more components has made things more complex. “So you need more skilled people to understand and manage these components.
“This is one of the biggest challenges for CIOs. And they are under increasing pressure, which gets worse with each economic crisis.”
However, CIOs can start using Lean IT in the data centre, Pascha says. “ROI is the translation of technical advantage into economic benefits. In the IT infrastructure, this means you want to avoid capital spend on technology refresh of servers and storage.
“For IT operations, you need to avoid backup faults, save time on disaster recovery, and lower the opportunity cost.
“The business implications include reduced downtime (planned and unplanned), and less IT user waiting time. With higher performance, IT can give back productivity time.”
Pascha points out that a Lean approach could help CIOs to properly calculate the net asset value of IT and see where the budget goes. “A redistribution of IT budget lets more money be spent on innovation,” he says. “And an IT strategy aligned with future business requirements means there will be more time and budget available for innovations which will drive the business.”