A number of our customers are looking to transform the way they store and manage their data. As we've worked through that process with them, we've reached a logical conclusion: traditional storage infrastructure is bloated and obsolete.
In today's datacenter, one of the biggest challenges is getting the balance right between performance, capacity and cost. As performance requirements remain high and unstructured data volumes continue to grow at exponential rates, supporting both on traditional infrastructure is tough to say the least: there's a direct conflict between high performance / high cost platforms and low cost / high capacity platforms.
Using the same infrastructure for both requirements just isn't sustainable: as data volumes grow, you end up keeping large volumes of data on high cost, high performance storage, which isn't an efficient place for it.
So what’s the answer?
In my mind, these two requirements have to be decoupled into a performance tier and a capacity tier, supplemented with a data lifecycle management layer that enables you to manage your data effectively.
The Performance Tier
There are a number of options for delivering the performance tier, ranging from traditional SAN, to all-flash arrays, to more all-encompassing hyper-converged vendors like Nutanix.
We believe that the choice should be driven by the workloads, but quite often this isn't the case, with organisations deploying high performance platforms across all workloads. When you consider what they're actually being used for, these high performance systems tend to be overkill.
In our opinion, you really need a proper understanding of the performance requirements of your workloads before deciding on an approach. You also need clarity over the longer term picture in terms of refresh cycles for compute and storage. Our belief is that hyper-converged platforms make the most sense here, as they enable you to deliver multiple performance points on a single unified platform at scale.
The Capacity Tier
When we consider the capacity tier we're really talking about unstructured data storage. Traditional NAS-based approaches are hugely inefficient: with RAID and replication you're storing between 3x and 5x more data than you need to. Wind the clock back a few years, though, and there was very little alternative.
Object and Cloud storage becoming more mainstream has changed that dynamic.
They give you the ability to scale at lower cost and remove the complexity of data protection and management compared to traditional approaches, in many cases saving around 80% of the overall storage costs for unstructured data. Object / Cloud storage platforms are geared towards high capacity and low cost rather than performance, meaning we're able to take advantage of low cost commodity hardware, with all of the intelligence held at the software layer.
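To put rough numbers on the efficiency gap, here is a back-of-the-envelope comparison. The figures are illustrative assumptions, not vendor data: a mirrored (RAID-10) array replicated to a second site multiplies raw capacity by 4x, whereas object stores typically protect data with erasure coding, where a scheme such as 10+4 (10 data fragments plus 4 parity fragments) needs only 1.4x.

```python
def raw_capacity(usable_tb, overhead_factor):
    """Raw capacity (TB) needed to hold `usable_tb` of data at a given protection overhead."""
    return usable_tb * overhead_factor

usable = 100  # TB of actual unstructured data (example figure)

# Traditional approach: RAID-10 mirroring (2x) plus a full replica off-site (x2)
traditional = raw_capacity(usable, 2 * 2)        # 400 TB raw

# Object storage with a 10+4 erasure code: 14 fragments carry 10 fragments of data
erasure_coded = raw_capacity(usable, 14 / 10)    # 140 TB raw

raw_saving = 1 - erasure_coded / traditional     # 65% less raw capacity

print(traditional, erasure_coded, raw_saving)
```

The raw-capacity saving alone is around 65% in this example; the ~80% overall cost saving quoted above also reflects the move from proprietary arrays to commodity hardware.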
New Management Requirements
Theoretically, one of the big benefits of using a single storage platform is that you minimise the management tools / overhead that you need. I say theoretically because in reality the chances are you still have separate tools for data protection, smaller branch office / project storage silos, and overlay technologies for things like collaboration and optimisation.
That being said, clearly separating the two data storage tiers does mean a greater reliance on third party tool sets to provide visibility and control over your data. The key is to make sure they're vendor agnostic, working across storage platforms and hypervisors, so that you avoid being locked in at the management level. As crazy as it sounds, we've spoken with organisations that have such a huge investment in vendor-specific management tools that they won't even consider an alternative storage or compute provider.
The Next Generation Datacenter
I don’t think that what I’m advocating here is completely off base, and from discussions with our customers I think it’s widely recognised that the traditional approach to delivering infrastructure can’t continue.
New and disruptive storage and infrastructure vendors are entering the market, while traditional vendors fail to innovate and opt for acquisition instead - IBM's planned acquisition of Cleversafe is a prime example. It's these new players that are offering a credible alternative for delivering infrastructure services, and the common theme is that most of them are software defined.
In our opinion, enterprise datacenters of the future are more likely to resemble those of Facebook or Amazon: lower cost commodity hardware overlaid by intelligent, software defined services, not just for storage and compute but also for networking and security. These services will be linked by a management layer handling orchestration and automation to deliver one unified infrastructure that can scale and morph in line with changing business requirements, and that will ultimately bleed into the use of public Cloud services.
If you'd like to understand more about where we see the market shifting, or you want to hear from any of our partners, why not attend our event in January or drop one of the team a line today on [email protected]