September 12, 2015 | Chris Riley

Do Containers Become the DevOps Pipeline?

The DevOps pipeline is a wonderful thing, isn’t it? It streamlines the develop-and-release process to the point where you can (at least in theory, and often enough in practice) pour fresh code in one end, and get a bright, shiny new release out of the other. What more could you want?

That’s a trick question, of course. In the contemporary world of software development, nothing is the be-all, end-all, definitive way of getting anything done. Any process, any methodology, no matter how much it offers, is really an interim step, a stopover on the way to something more useful or more relevant to the needs of the moment (or of the moment-after-next). And that includes the pipeline.

Consider for a moment what the software release pipeline is, and what it isn’t. It is an automated, script-directed implementation of the post-coding phases of the code-test-release cycle. It uses a set of scriptable tools to do the work, so that the process does not require hands-on human intervention or supervision. It is not any of the specific tools, scripting systems, methodologies, architectures, or management philosophies which may be involved in specific versions of the pipeline.
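
To make that a little more concrete, here is a minimal sketch in Python of what such a script-directed pipeline can look like. The stage names and the make targets are placeholders, not a reference to any particular tool or product:

    # pipeline.py - a minimal, hypothetical sketch of a script-directed pipeline.
    # Each stage is just a shell command; a real pipeline would add logging,
    # artifact handling, notifications, and so on.
    import subprocess
    import sys

    STAGES = [
        ("build",   ["make", "build"]),    # compile and package the code
        ("test",    ["make", "test"]),     # run the automated test suite
        ("release", ["make", "release"]),  # publish the build artifact
    ]

    def run_pipeline():
        for name, command in STAGES:
            print(f"--- stage: {name} ---")
            result = subprocess.run(command)
            if result.returncode != 0:
                # A failed stage stops the pipeline; humans only step in on failure.
                sys.exit(f"stage '{name}' failed with code {result.returncode}")

    if __name__ == "__main__":
        run_pipeline()

The point is not the specific commands, but that the entire post-coding flow is driven by a script rather than by a person watching a console.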

Elements of the present-day release pipeline have been around for some time. Fully automated integrate-and-release systems were not uncommon (even at smaller software companies) by the mid-90s. The details may have been different (DOS batch files, output to floppy disks), and many of the current tools such as those for automated testing and virtualization were not yet available, but the release scripts and the tasks that they managed were at times complex and sophisticated. Within the limited scope that was then available, such scripts functioned as segments of a still-incomplete pipeline.

Today’s pipeline has expanded to engulf the build and test phases, and it incorporates functions which at times have a profound effect on the nature of the release process itself, such as virtualization. Virtualization and containerization fundamentally alter the relationship between software (and along with it, the release process) and the environment. By wrapping the application in a mini-environment that is completely tailored to its requirements, virtualization eliminates the need to tailor the software to the environment. It also removes the need to provide supporting files to act as buffers or bridges between it and the environment. As long as the virtualized system itself is sufficiently portable, multi-platform release shrinks from being a major (and complex) issue to a near-triviality.

Containerization carries its own (and equally clear) logic. If virtualization makes the pipeline work more smoothly, then stripping the virtual environment down to only those essentials required by the software will streamline the process even more, by providing a lightweight, fully-tailored container for the application. It is virtualization without the extra baggage and added weight.
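
As a rough sketch of what that looks like in practice, the following Python snippet builds a stripped-down image and runs it once. It assumes the Docker CLI is installed and that a Dockerfile already exists for the application; the image name is a made-up placeholder:

    # container_build.py - hypothetical sketch: build a minimal, application-tailored
    # image and run it once. Assumes the Docker CLI and a local Dockerfile exist.
    import subprocess

    IMAGE = "myapp:latest"  # placeholder image name

    # Build the image; only what the application actually needs goes into it.
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)

    # Run the container once; --rm throws it away afterward, keeping things light.
    subprocess.run(["docker", "run", "--rm", IMAGE], check=True)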

Once you start stripping nonessentials out of the virtual environment, the question naturally arises — what does a container really need to be? Is it better to look at it as a stripped-down virtual box, or something more like a form-fitting functional skin? The more that a container resembles a skin, the more that it can be regarded as a standardized layer insulating the application from (and integrating it into) the environment. At some point, it will simply become the software’s outer skin, rather than something wrapped around it or added to it.

Containerization in the Pipeline

What does this mean for the pipeline? For one thing, containerization itself tends to push development toward the microservices model; with a fully containerized pipeline, individual components and services become modularized to the point where they can be viewed as discrete components for purposes such as debugging or analyzing a program’s architecture. The model shifts all the way over from the “software as tangle of interlocking code” end of the spectrum to the “discrete modules with easily identifiable points of contact” end. Integration-and-test becomes largely a matter of testing the relationship and interactions of containerized modules.

In fact, if all of the code moving through the pipeline is containerized, then management of the pipeline naturally becomes management of the containers. If the container mediates the application’s interaction with its environment, there is very little point in having the scripts that control the pipeline directly address those interactions on the level of the application’s code. Functional testing of the code can focus on things such as expected outputs, with minimal concern about environmental factors. If what’s in the container does what it’s supposed to do when it’s supposed to do it, that’s all you need to know.
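
A black-box functional test of a containerized module can be as simple as the following sketch: run the container, and look only at what comes out. The image name and the expected string are placeholders, and the test assumes a pytest-style runner:

    # test_container_output.py - hypothetical sketch of container-level functional
    # testing: run the containerized module and assert only on its observable output.
    import subprocess

    IMAGE = "myapp:latest"  # placeholder image name

    def test_expected_output():
        result = subprocess.run(
            ["docker", "run", "--rm", IMAGE],
            capture_output=True, text=True, check=True,
        )
        # All the pipeline cares about: did the container do what it was supposed to?
        assert "ready" in result.stdout  # placeholder expectation

Nothing in the test knows or cares what operating system, libraries, or configuration live inside the container.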

At some point, two things happen to the pipeline. First, the scripts that constitute the actual control system for the pipeline can be replaced by a single script that controls the movement and interactions of the containers. Since containerization effectively eliminates multiplatform problems and special cases, this change alone may simplify the pipeline by several orders of magnitude. Second, the tools used in the pipeline (for integration or functional testing, for example) will become more focused on the containers, and on the applications as they operate from within the containers.
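
Taken to its conclusion, that single script can be surprisingly short. The sketch below is hypothetical; the image name, registry, and self-test command are placeholders rather than details of any real pipeline:

    # release.py - hypothetical sketch of the pipeline reduced to one script:
    # build the container, exercise it, promote it.
    import subprocess
    import sys

    IMAGE = "registry.example.com/myapp:latest"  # placeholder image and registry

    def sh(*cmd):
        if subprocess.run(list(cmd)).returncode != 0:
            sys.exit("command failed: " + " ".join(cmd))

    sh("docker", "build", "-t", IMAGE, ".")          # produce the container
    sh("docker", "run", "--rm", IMAGE, "self-test")  # run its own checks in place
    sh("docker", "push", IMAGE)                      # promote it downstream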

When this happens, the containers become, if not the pipeline itself, then at least the driving factor that determines the nature of the pipeline. A pipeline of this sort will not need to take care of many of the issues handled by more traditional pipelines, since the containers themselves will handle them. To the degree that containers do take over functions previously handled by pipeline scripts (such as adaptation for specific platforms), they will then become the pipeline, while the pipeline becomes a means of orchestrating the containers.

None of this should really be surprising. The pipeline was originally developed to automatically manage release in a world without containerization, where the often tricky relationship between an application and the multiple platforms on which it had to run was a major concern. Containerization, in turn, was developed in response to the opportunities that were made possible by the pipeline. It’s only natural that containers should then remake the pipeline in their own image, so that the distinction between the two begins to fade.

Chris Riley

Chris Riley is a bad coder turned DevOps analyst. His goal is to break the enterprise barrier to modern development. He can be found on Twitter @HoardingInfo.
