Is bare metal infrastructure relevant in a DevOps world? The cloud has reduced hardware to little more than a substrate for the pool of resources that is the cloud itself. Those resources are the important part; the hardware is almost a formality.
Or at least that’s been the standard cloud-vs-metal story, until recently. Times change, and everything that was old does eventually become new again — usually because of a combination of unmet needs, improved technology, and a fresh approach. And the bare-metal comeback is no different.
Unmet Needs
The cloud is a pool of resources (processor time, memory, storage, bandwidth) that are not only generic but shared. Even if you pay a premium for a greater share of these resources, you are still competing with the other premium-paying customers. And the hard truth is that cloud providers can’t guarantee a consistent, high level of performance.
Cloud performance depends on the demand placed on it by other users, demand you can’t control. If you need reliable performance, there is a good chance you will not find it in the cloud. This is particularly true if you’re dealing with large databases; Big Data tends to be resource-hungry, and it is likely to do better on a platform with dedicated resources down to the bare-metal level than in a cloud, where it may have to contend with dueling Hadoops.
The cloud can present sticky compliance issues as well. If you’re dealing with formal data-security standards, such as those set by the Securities and Exchange Commission or by overseas agencies, verification may be difficult in a cloud environment. Bare metal provides an environment with more clearly defined, hardware-based boundaries and points of entry.
Improved Technology
Even if Moore’s Law has been slowing down to sniff the flowers lately, hardware capabilities have improved significantly: storage capacity has grown, and higher-capacity solid-state drives have become widely available, resulting in a major boost in key performance parameters.
And technology isn’t just hardware; it’s also software and system architecture. Open-source initiatives for standardizing and improving the hardware interface layers, along with the highly scalable, low-overhead CoreOS, make lean, efficient bare-metal provisioning and deployment a reality. And that means it’s definitely time to look closely at what bare metal is now capable of doing, and at what it can now do better than the cloud.
A Fresh Approach
As technology improves, it makes sense to take a new look at existing problems and see what can be done now that wasn’t possible (or easy) before. That’s where Docker and container technology come in. One of the major drawbacks of bare metal in comparison to cloud systems has always been the relative inflexibility of available resources. You can expand things like memory, storage, and the number of processors, but the hard limit will always be what is physically present in the system; if you want to go beyond that point, you will need to manually install new hardware.
If you’re deploying a large number of virtual machines, resource inflexibility can be a serious problem. VMs have relatively high overhead: they require hypervisors, and each one needs enough memory and storage to hold a full set of virtualized hardware plus a complete guest operating system, all of which consumes processor time as well. In the cloud, with its large pool of resources, it isn’t difficult to shift resources quickly to meet changing demand as virtual machines are created and deleted. In a bare-metal system with hardware-dependent resources, this kind of allocation can quickly run up against the hard limits of the system.
Docker-based deployment, however, can radically reduce the demands placed on the host system. Containers are built to be lean; they use the kernel of the host OS, and they include only those applications and utilities which must be available locally. If a virtual machine is a bulky box that contains the application being deployed, plus plenty of packing material, a container is a thin wrapper around the application. And Docker itself is designed to manage a large number of containers efficiently, with little overhead.
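To make that contrast concrete, here is a minimal sketch using the Docker SDK for Python (the open-source `docker` package, installable with `pip install docker`). It starts a batch of containers from a small base image on a single host; the image tag, container count, and memory cap are illustrative choices, not recommendations.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Start a batch of lightweight containers from a small base image.
# They all share the host kernel, so per-container overhead stays low.
containers = [
    client.containers.run(
        "alpine:3.19",        # small base image, a few megabytes
        ["sleep", "300"],     # keep each container alive briefly
        detach=True,
        mem_limit="16m",      # illustrative per-container memory cap
    )
    for _ in range(50)
]

print(f"{len(containers)} containers running against one host kernel")

# Clean up when done.
for c in containers:
    c.stop(timeout=1)
    c.remove()
```

Spinning up fifty full virtual machines this way would mean fifty guest operating systems; here, the only per-container cost beyond the application itself is a thin runtime wrapper.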
On bare metal, the combination of Docker, a lean, dedicated host system such as CoreOS, and an open-source hardware management layer makes it possible to host a much higher number of containers than virtual machines. In many cases, this means that bare metal’s relative lack of flexibility with regard to resources is no longer a factor; if the number of containers that can be deployed using available resources is much greater than the anticipated short-to-medium-term demand, and if the hardware resources themselves are easily expandable, then the cloud really doesn’t offer much advantage in terms of resource flexibility.
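A back-of-the-envelope calculation shows why the numbers work out this way. Every figure below is an illustrative assumption (per-VM and per-container overhead vary widely in practice), not a measurement:

```python
# Back-of-the-envelope capacity estimate. Every figure here is an
# illustrative assumption, not a benchmark.
HOST_RAM_GB = 256             # hypothetical bare-metal host
RESERVED_GB = 16              # held back for the host OS and Docker daemon

VM_OVERHEAD_GB = 1.0          # assumed guest OS + hypervisor cost per VM
CONTAINER_OVERHEAD_GB = 0.01  # assumed runtime cost per container (~10 MB)
APP_FOOTPRINT_GB = 0.25       # the application itself, identical either way

usable = HOST_RAM_GB - RESERVED_GB
vm_count = int(usable / (VM_OVERHEAD_GB + APP_FOOTPRINT_GB))
container_count = int(usable / (CONTAINER_OVERHEAD_GB + APP_FOOTPRINT_GB))

print(f"~{vm_count} VMs vs ~{container_count} containers on the same host")
# With these assumptions: ~192 VMs vs ~923 containers.
```

The exact ratio depends entirely on the workload, but the shape of the result, containers outnumbering VMs by a wide margin on identical hardware, is what moves bare metal back into contention.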
In effect, Docker moves bare metal from the “can’t use” category to “can use” when it comes to the kind of massive deployments that are a standard part of the cloud environment. This is an important point: very often, it is this change from “can’t use” to “can use” that sets off revolutions in the way technology is applied (much of the history of personal computers, for example, could be described in terms of “can’t use”/“can use” shifts), and that change is generally one of perception and understanding as much as one of technology.
In the case of Docker and bare metal, the shift to “can use” allows system managers and architects to take a close look at the advantages of bare metal in comparison to the cloud. Hardware-based solutions, for example, are often the preferred option in situations where access to dedicated resources is important. If consistent speed and reliable performance matter, bare metal may be the best choice. And the biggest surprises may come when designers start asking themselves, “What can we do with Docker on bare metal that we couldn’t do with anything before?”
So, does Docker make bare metal relevant? Yes, it does. More than that, it makes bare metal into a new game, with new and potentially very interesting rules.
About the Author
Michael Churchman (@mazorstorn) started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry, working on the prototype for the groundbreaking laser-disc game Dragon’s Lair. He spent much of the 1990s in the high-pressure bundled-software industry, where near-continuous release cycles and automated deployment were already de facto standards; during that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues.