In today’s digital economy, markets and customer purchasing behaviours are changing. Customers expect everything to be available online, anytime, anywhere.

By Morne Bekker, South African country manager and district manager for the SADC region at NetApp

To satisfy these expectations, enterprise IT departments must be able to react quickly to changing business needs – while continuing to manage the mission-critical legacy workloads that “keep the lights on.” Enterprise IT staff face a multitude of daily operational challenges that increase the pressure on the infrastructure they manage.

To meet these challenges, enterprises must undergo a process of digital transformation. IT teams are moving away from traditional infrastructure and the “old data world” to a more flexible infrastructure that has the agility, scalability, predictability, and automation to react to changing business needs. A key imperative is to accomplish this transformation without risking well-established business operations or sacrificing resiliency and reliability.

As such, smart enterprise IT teams are looking to the highly flexible architecture of a next-generation data centre (NGDC) to enable transformation. This architecture ushers in a “new data world,” meeting changing business needs while also supporting the infrastructure and virtualisation requirements of more traditional enterprise architectures.


The challenge presented by the traditional data centre

One of the biggest challenges in any data centre is delivering predictable performance, especially in the face of proliferating applications and services, many of which can be extremely resource intensive. The old data world – built on traditional storage architectures – is beset by three main shortcomings: a lack of predictability, scalability, and agility. These shortcomings limit your ability to consolidate workloads, eliminate noisy neighbours, and meet performance requirements.


The old data world doesn’t address the real problem

In the old data world, the only ways to meet these challenges are to overprovision shared infrastructure or to create dedicated infrastructure silos for important applications.

As you know all too well, either approach has a significant impact on your data centre and your IT team: virtualisation ratios drop, your infrastructure sprawls, and capital costs go up.

As the complexity of your environment rises, IT productivity falls, increasing your operating costs. And even with all that effort, the results are mixed, leaving your IT team struggling to keep up with the business.


The old data world is inflexible, disruptive, and expensive

In the old data world, scaling up storage is a complicated undertaking. Conventional storage scales only within narrow limits. You might be able to add capacity, but that extra capacity doesn’t necessarily translate to more performance.

It can be difficult to predict when performance will reach its limit, making infrastructure planning a challenge. As a result, the tendency is to overprovision storage up front to avoid surprises later, but that approach can mean paying ahead for resources that might sit idle for months or years.

When a storage system reaches its limit, you either rip it out and replace it with a more powerful system, or you add another separately managed array. The whole process is inflexible, disruptive, and expensive, creating a management burden that drives up operating costs.

When it comes to compute, many enterprises still run some applications on bare-metal servers. In many cases, these servers are themselves specialised and differ from the server hardware used in virtual environments. These applications are siloed on their own infrastructure and don’t benefit from the flexibility that virtualisation provides. Managing and scaling compute for these applications poses many of the same problems as scaling traditional storage.


Progressing to the next-generation data centre

IT infrastructure is supposed to be a means to an end, not an end in itself. In the old data world of traditional IT infrastructure, however, your IT team must continually attend to monitoring, management, and maintenance. These tasks can keep your IT team from focusing enough attention on the applications and services that move your business forward.

The NetApp Data Fabric, a key advantage of a next-generation data centre built on NetApp technology, addresses this problem. It allows you to move your data seamlessly across multiple environments – whether on premises or in a public or hybrid cloud.
