
January 4, 2016  Michael Churchman

Snowflake Configurations and DevOps Automation

“Of course our pipeline is fully automated! Well, we have to do some manual configuration adjustments on a few of our bare metal servers after we run the install scripts, but you know what I mean…”

We do know what you mean, but that is not full automation. Call it what it really is: partial automation in a snowflake environment. A snowflake configuration is ad hoc and “unique” to the environment at large. But in DevOps, you need to drop unique configurations and focus on full automation.

What’s Wrong With Snowflake Configurations?

In DevOps, a snowflake is a server that requires special configuration beyond that covered by automated deployment scripts. You do the automated deployment, and then you tweak the snowflake system by hand.

For a long time (through the ‘90s, at least), snowflake configurations were the rule. Servers were bare metal, and any differences in hardware configuration or peripherals (as well as most differences in installed software) meant that any changes had to be handled on a system-by-system basis. Nobody even called them snowflakes. They were just normal servers.

But what’s normal in one era can become an anachronism or an out-and-out roadblock in another era, and nowhere is this more true than in the world of software development. A fully automated, script-driven DevOps pipeline works best when the elements that make up the pipeline are uniform. A scripted deployment to a thousand identical servers may take less time and run more smoothly than deployment to half a dozen servers that require manual adjustments after the script has been run.

For more on DevOps pipelines, see “How to Build a Continuous Delivery Pipeline” ›

No Virtual Snowflakes …

A virtual snowflake might be at home on a contemporary Christmas tree, but there’s no place for virtual snowflakes in DevOps. Cloud-based virtual environments are by their nature software-configurable; as long as the cloud insulates them from any interaction with the underlying hardware, there is no physical reason for a set of virtual servers running in the same cloud environment to be anything other than identical. Any differences should be based strictly on functional requirements — if there is no functional reason for any of the virtual servers to be different, they should be identical.
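
As a concrete illustration, one way a team might enforce that uniformity is a simple drift check that compares each server’s reported configuration against a single baseline. The Python sketch below is purely illustrative; the baseline values, server names, and inventory are hypothetical, and a real check would pull this data from the cloud provider’s API or a configuration-management tool rather than a hard-coded dictionary.

```python
# Minimal sketch: flag servers whose configuration drifts from a shared baseline.
# The baseline and inventory below are illustrative placeholders.

BASELINE = {"os": "ubuntu-22.04", "cpu_count": 4, "memory_gb": 16, "docker": "24.0"}

INVENTORY = {
    "web-01": {"os": "ubuntu-22.04", "cpu_count": 4, "memory_gb": 16, "docker": "24.0"},
    "web-02": {"os": "ubuntu-22.04", "cpu_count": 4, "memory_gb": 16, "docker": "23.0"},
}

def drift(server_config, baseline):
    """Return the settings where a server deviates from the baseline."""
    return {key: (server_config.get(key), expected)
            for key, expected in baseline.items()
            if server_config.get(key) != expected}

for name, config in INVENTORY.items():
    diffs = drift(config, BASELINE)
    if diffs:
        print(f"{name} is drifting from the baseline: {diffs}")
```

Anything the check flags is either folded back into the baseline (because it reflects a genuine functional requirement) or rebuilt from the standard image, so accidental snowflakes never accumulate.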

Why is it important to maintain such a high degree of uniformity? In DevOps, virtual machines (whether full VMs or Docker containers) serve much the same role as steel shipping containers. When you ship something overseas, you’re only concerned with the container’s functional qualities. Uniform shipping containers are functionally useful because they have known characteristics, and they can be filled and stacked efficiently. This is equally true of even the most full-featured virtual machine when it is deployed as a DevOps container.

This is all intrinsic to core DevOps philosophy. The container exists solely to deliver the software or services, and should be optimized for that purpose. When delivery to multiple virtual servers is automated and script-driven, optimization requires as much uniformity as possible in server configurations.

For more on containers, see “Kubernetes vs. Docker: What Does It Really Mean?” ›

What About Non-Virtual Snowflakes?

If you only deal in virtual servers, it isn’t hard to impose the kind of standardization described above. But real life isn’t always that simple; you may find yourself working in an environment where some or all of the servers are bare metal. How do you handle a physical server with snowflake characteristics? Do you throw in the towel, and adjust it manually after each deployment, or are there ways to prevent a snowflake server from behaving like a snowflake?

As it turns out, there are ways to de-snowflake a physical server — ways that are fully in keeping with core DevOps philosophy.

First, however, consider this question: What makes a snowflake server a snowflake? Is it the mere fact that it requires special settings, or is it the need to make those adjustments outside of the automated deployment process (or in a way that interrupts the flow of that process)? A thoroughgoing DevOps purist might opt for the first definition, but in practical terms, the second definition is more than adequate. A snowflake is a snowflake because it must be treated as a snowflake. If it doesn’t require any special treatment, it’s not a snowflake.

One way to eliminate the need for special treatment during deployment (as suggested by Daniel Lindner) is to install a virtual machine on the server, and deploy software on the virtual machine. The actual deployment would ignore the underlying hardware and interact only with the virtual system. The virtual machine would fully insulate the deployment from any of the server’s snowflake characteristics.
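
A rough sketch of what that looks like in practice: the deployment script talks only to a container runtime on the host, never to the host itself. The image name, container name, and port below are hypothetical placeholders, and a real pipeline would parameterize them rather than hard-code them.

```python
# Minimal sketch: run the application inside a container on the bare-metal host,
# so the deployment never touches the host's snowflake configuration directly.
import subprocess

IMAGE = "registry.example.com/myapp:1.4.2"   # hypothetical image reference
CONTAINER = "myapp"

def deploy():
    # Remove any previous instance; ignore the error if none is running.
    subprocess.run(["docker", "rm", "-f", CONTAINER], check=False)
    # Pull the exact same image that every other server receives.
    subprocess.run(["docker", "pull", IMAGE], check=True)
    # The container sees a uniform environment regardless of the host underneath.
    subprocess.run([
        "docker", "run", "-d", "--name", CONTAINER,
        "-p", "8080:8080", IMAGE,
    ], check=True)

if __name__ == "__main__":
    deploy()
```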

What if it isn’t practical or desirable to add an extra virtual layer? It may still be possible to handle all of the server’s snowflake adjustments locally by means of scripts (or automated recipes, as Martin Fowler put it in his original Snowflake Server post) running on the target server itself. These local scripts would need to recognize the elements of a deployment that require snowflake-specific adjustments, translate those requirements into local settings, and apply them. If those elements are available as part of the deployment data, the local scripts can intercept that data as the main deployment script runs. If they are not obvious (for example, because they are buried in the compiled application code), it may be necessary to ship a table of values that require local adjustment along with the deployment script. That isn’t full de-snowflaking, but it is a 99.99 percent de-snowflaking strategy.
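
As a sketch of what such a local recipe might look like, the Python script below keeps a small table of host-specific overrides and applies only the ones the incoming deployment actually defines. The file path, setting names, and override values are all hypothetical; the point is that the snowflake knowledge lives on the snowflake itself, not in the shared deployment script.

```python
# Minimal sketch of a local "de-snowflaking" recipe: the deployment ships a
# generic settings file, and this script (kept on the snowflake host) rewrites
# the handful of values that must differ on this particular machine.
import json

OVERRIDES = {                      # the "table of values" for this host (hypothetical)
    "network.interface": "eth1",   # this box happens to use a second NIC
    "storage.data_dir": "/mnt/raid/appdata",
}

def localize(settings_path="deployed_settings.json"):
    with open(settings_path) as f:
        settings = json.load(f)

    # Apply only the overrides the incoming deployment actually defines,
    # so the script stays silent when nothing snowflake-specific is touched.
    applied = {k: v for k, v in OVERRIDES.items() if k in settings}
    settings.update(applied)

    with open(settings_path, "w") as f:
        json.dump(settings, f, indent=2)
    return applied

if __name__ == "__main__":
    print("Local adjustments applied:", localize())
```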

So, what is the bottom line on snowflake servers? In an ideal DevOps environment, they wouldn’t exist. In the less-than-ideal world where real-life DevOps takes place, they can’t always be eliminated, but you can still neutralize most or all of their snowflake characteristics to the point where they do not interfere with the pipeline.

For more on virtual machines, see “Docker Logs vs Virtual Machine Logs” ›


Michael Churchman

Michael Churchman started as a scriptwriter, editor, and producer during the anything-goes early years of the game industry. He spent much of the ‘90s in the high-pressure bundled software industry, where the move from waterfall to faster release was well under way, and near-continuous release cycles and automated deployment were already de facto standards. During that time he developed a semi-automated system for managing localization in over fifteen languages. For the past ten years, he has been involved in the analysis of software development processes and related engineering management issues. He is a regular Fixate.io contributor.

