There’s a lot of talk at conferences and on tech blogs about converged and hyper-converged infrastructure. At its simplest, both terms refer to solutions that package up elements of infrastructure (compute, storage, and network) into a single offering, aiming to avoid single points of failure (particularly in storage and networking) while simplifying how infrastructure is provisioned and supported. In this short post, we’ll look at the industry’s move toward hyper-converged solutions and what it means for enterprises.
One of the issues with both converged and hyper-converged infrastructure is that definitions often differ across the industry. In an industry not short of complexity, this can make it very difficult for potential buyers to dig beyond the hype.
Traditional infrastructure to converged infrastructure
The general aim of convergence is to reduce complexity within the data center. A converged solution includes components such as servers, storage, network, and more. This allows businesses to go to one supplier rather than several and buy a single piece of kit containing the core components needed to deliver cloud infrastructure.
A potential benefit here is that there’s less work in building and integrating a system than with a traditional setup, where a company would buy everything they need to deliver a cloud solution separately, connect it together and build it out from there. Providers such as VCE (a joint venture between VMware, Cisco, and EMC) began releasing converged solutions to reduce the risk, time to market, and complexity involved in building out cloud solutions.
A potential issue for those looking to implement converged and hyper-converged solutions is the need to review the team’s skills matrix. As with many significant changes in infrastructure, it can bring about the need for upskilling, retraining, and new staff, which can cause unexpected disruption and/or costs.
Converged to hyper-converged infrastructure
Within the transition from traditional to converged infrastructure, the core components didn’t change; they were simply packaged together. Hyper-converged solutions, however, provide a single tier of compute, storage, and networking. The solution is modular and integrates a hypervisor with software-defined storage and software-defined networking, reducing the number of components to be managed within the data center.
It’s a further simplification of the data center, which may mean less physical setup and potentially shorter lead times. However, because every company has its own interpretation of what hyper-converged means, and every solution differs as a result, it’s difficult to say categorically.
Because a hyper-converged appliance is a single tier, there’s local storage for each server, which can also be shared across a whole hyper-converged cluster. This collective pool of storage across hyper-converged nodes, which is often referred to as a virtual SAN, can be divided up and utilized as smaller volumes as required.
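To make that idea concrete, here’s a minimal sketch in Python of how local storage contributed by each node might be pooled and carved into volumes. The classes, names, and capacities are purely illustrative assumptions, not any vendor’s actual API; real virtual SAN implementations also handle replication, fault domains, caching tiers, and much more.

```python
# Toy model of pooled local storage across hyper-converged nodes.
# Everything here is hypothetical and for illustration only.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    local_storage_tb: float  # raw capacity this node contributes to the pool

@dataclass
class VirtualSAN:
    nodes: list = field(default_factory=list)
    volumes: dict = field(default_factory=dict)

    @property
    def total_capacity_tb(self) -> float:
        # Every node's local disks appear as one shared pool
        return sum(n.local_storage_tb for n in self.nodes)

    @property
    def allocated_tb(self) -> float:
        return sum(self.volumes.values())

    def carve_volume(self, name: str, size_tb: float) -> None:
        # Hand out a slice of the shared pool as a smaller volume
        if self.allocated_tb + size_tb > self.total_capacity_tb:
            raise ValueError("Not enough free capacity in the pool")
        self.volumes[name] = size_tb

# Three nodes, each contributing 10 TB of local disk to the cluster-wide pool
san = VirtualSAN(nodes=[Node("node-1", 10), Node("node-2", 10), Node("node-3", 10)])
san.carve_volume("vm-datastore", 12)  # larger than any single node's local disks
print(san.total_capacity_tb, san.allocated_tb)  # 30 12
```

Running this prints a pooled capacity of 30 TB and a 12 TB volume carved from it, larger than any single node’s local disks, which is the point of pooling.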
The simplicity of the solution lies in the fact that, when you need more resources, you bolt on a whole new “block” of infrastructure like a Lego brick. However, this could also prove to be an issue…
As it’s an all-in-one solution, scalability can be an issue: should you need additional compute, you have to buy an entire block of infrastructure (with additional storage and network capacity) to get it. For certain types of projects and well-balanced systems that scale steadily, this might not be a problem, as a whole tier may always be required. This modular approach could also lock businesses into a particular hyper-converged solution; these single tiers might work brilliantly together, but how would you go about moving away from them once you’ve got, say, 30 tiers up and running? It also assumes that your requirements won’t change. Say you’ve got a new, particularly resource-intensive piece of software to run that requires double the storage you’re used to; a hyper-converged solution would then restrict your hardware choices.
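As a rough illustration of that scaling constraint, the short Python sketch below works out how many whole blocks a compute-heavy expansion would force you to buy, and how much storage comes along for the ride. The per-block figures are made up; real appliance specifications vary by vendor and model.

```python
import math

# Hypothetical per-block capacities; real appliance specs differ by vendor and model.
BLOCK_VCPUS = 32
BLOCK_STORAGE_TB = 10

def blocks_needed(extra_vcpus: int, extra_storage_tb: float) -> int:
    # Whole blocks only: sized by whichever resource is the bottleneck
    return max(math.ceil(extra_vcpus / BLOCK_VCPUS),
               math.ceil(extra_storage_tb / BLOCK_STORAGE_TB))

# A compute-heavy expansion: 96 more vCPUs needed, but only 5 TB more storage
blocks = blocks_needed(extra_vcpus=96, extra_storage_tb=5)
surplus_storage_tb = blocks * BLOCK_STORAGE_TB - 5
print(blocks, surplus_storage_tb)  # 3 blocks, and 25 TB of storage you didn't ask for
```

In this made-up case, the compute requirement dictates three blocks, and 25 TB of storage arrives whether you need it or not.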
The important thing to remember with any new solution on the market is that, while it might make the data center simpler and easier to consume, for now the fundamentals of cloud technology remain the same. As simple as it sounds, issues faced with your current infrastructure won’t necessarily be solved by updating the hardware. Again, this depends on the situation and your goals.
If you’re upgrading part or all of your data center and you’re keen to explore what hyper-converged infrastructure could do for your operations, be sure to create a clear internal specification before approaching vendors. This will make it far easier to compare the different solutions and see how they might help your business.
Converged and hyper-converged solutions tend to be hardware-only offerings, with managed services available from pre-approved vendor partners who might deliver additional services alongside the hardware; again, this might be a sticking point for some businesses.
So the pattern we’ve seen so far is that converged solutions attempt to shake up the way we build and manage cloud infrastructure. This is interesting to see and might well be a lifesaver for particular companies and projects; however, it’s not going to get rid of all of your problems overnight. Best practice is always going to prevail, and there’s no substitute for doing things the right way.
Looking to upgrade your infrastructure? The process always starts with people. Help your team level up with the most sought-after Hyperconverged Training & Certification with Nutanix Certifications. Get in touch with us to find out the best approach for your team and your business.