Solaris and containers may not seem like two words that go together—at least not in this decade. For the past several years, the container conversation has been all about platforms like Docker, CoreOS and LXD running on Linux (and, in the case of Docker, now Windows and Mac OS X, too).
But Solaris, Oracle’s Unix-like OS, has actually had containers for a long time. In fact, they go all the way back to the release of Solaris 10 in 2005 (technically, they were available in the beta version of Solaris 10 starting in 2004), long before anyone was talking about Linux containers for production purposes. And they’re still a useful part of the current version of the OS, Solaris 11.3.
Despite the name similarities, Solaris containers are hardly identical to Docker or CoreOS containers. But they do similar things by allowing you to virtualize software inside isolated environments without the overhead of a traditional hypervisor.
Even as Docker and the like take off as the container solutions of choice for Linux environments, Solaris containers are worth knowing about, too, especially if you’re the type of developer or admin who finds themselves stuck, against their will, in the world of proprietary, commercial Unix-like platforms because some decision-maker in the company’s executive suite is still wary of going wholly open source.
Plus, as I note below, Oracle now says it is working to bring Docker to Solaris containers—which means containers on Solaris could soon integrate into the mainstream container and DevOps scene.
Below, I’ll outline how Solaris containers work, what makes them different from Linux container solutions like Docker, and why you might want to use containers in a Solaris environment.
The Basics of Solaris Containers
Let’s start by defining the basic Solaris container architecture and terminology.
On Solaris, each container lives within what Oracle calls a local zone (the formal term in the Solaris documentation is a non-global zone). Local zones are software-defined boundaries to which specific storage, networking and/or CPU resources are assigned. Local zones are strictly isolated from one another, both to mitigate security risks and to ensure that no zone interferes with the operations of another.
Each Solaris system also has exactly one global zone, which is the host operating system instance itself and owns all of the system’s resources. The global zone controls the local zones (and it exists even if no local zones are defined); it’s the environment from which you configure and assign resources to local zones.
Each zone on the system, whether global or local, gets a unique name (the global zone is always named “global”; boring, I know, but predictable) and a unique numerical identifier (the global zone is always zone ID 0).
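To make that concrete, here’s roughly what listing the zones on a Solaris 11 host looks like from the global zone. This is an illustrative sketch; the local zone name web01 and its path are placeholders:

```
# List all configured zones (-c) in verbose format (-v), run from the global zone.
zoneadm list -cv

#  ID NAME    STATUS   PATH            BRAND    IP
#   0 global  running  /               solaris  shared
#   1 web01   running  /zones/web01    solaris  excl
```

Each zone shows up with its numeric ID, name, state and the path that holds its files.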
So far, this probably sounds a lot like Docker, and it is. Local zones on Solaris are like Docker containers, while the Solaris global zone is like the Docker engine itself.
Working with Zones and Containers on Solaris
The similarities largely end there, however, at least when it comes to the ways in which you work with containers on Solaris.
On Docker or CoreOS, you would use a tool like Swarm or Kubernetes to manage your containers. On Solaris, you use Oracle’s Enterprise Manager Ops Center (or, at the command line, the built-in zonecfg and zoneadm utilities) to set up local zones and define which resources are available to them.
Once you set up a zone, you can configure it to your liking (for the details, check out Oracle’s documentation) and then run software inside it.
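If you’d rather skip the GUI, the same basic workflow can be driven from the global zone with zonecfg and zoneadm. The following is a minimal sketch, assuming a hypothetical zone named web01 with a zonepath of /zones/web01; a real configuration would normally add networking and resource controls as well:

```
# Define the zone's configuration (Solaris 11 defaults plus an explicit zonepath).
zonecfg -z web01 "create; set zonepath=/zones/web01; set autoboot=true; commit"

# Install the zone's files, then boot it.
zoneadm -z web01 install
zoneadm -z web01 boot

# Connect to the zone's console to complete its initial system configuration.
zlogin -C web01
```

Once the zone is running, you can treat it much like a lightweight Solaris machine of its own.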
One particularly cool thing that Solaris containers let you do is migrate a physical Solaris system into a zone. You can also migrate zones between host machines. So, yes, Solaris containers can come in handy if you have a cluster environment, even though they weren’t designed for native clustering in the same way as Docker and similar software.
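Under the hood, moving a zone between hosts boils down to a detach/attach workflow. Here’s a rough sketch, assuming a hypothetical zone named web01 whose zonepath (/zones/web01) has been copied to, or is shared with, the destination host; the -u flag asks attach to update the zone’s software to match the new system if needed:

```
# On the source host: shut the zone down and detach it from this system.
zoneadm -z web01 halt
zoneadm -z web01 detach

# (Copy or share /zones/web01 with the destination host.)

# On the destination host: recreate the configuration from the detached zone,
# attach it, and boot it.
zonecfg -z web01 "create -a /zones/web01"
zoneadm -z web01 attach -u
zoneadm -z web01 boot
```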
Solaris Containers vs. Docker/CoreOS/LXD: Pros and Cons
By now, you’re probably sensing that Solaris containers work differently in many respects from Linux containers. You’re right.
In some ways, the differences make Solaris a better virtualization solution. In others, the opposite is true. Mostly, though, which approach is better depends on what matters most to you.
Solaris’s chief advantages include:
- Easy configuration: As long as you can point and click your way through Enterprise Manager Ops Center, you can manage Solaris containers. There’s no need to learn something like the Docker CLI.
- Easy management of virtual resources: On Docker and CoreOS, sharing storage or networking with containerized apps via tools like Docker Data Volumes can be tedious. On Solaris it’s more straightforward (largely because you’re slicing up the host resources of only a single system, not a cluster). The sketch after this list gives a sense of what that looks like from the command line.
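To show what slicing up host resources looks like in practice, here’s a sketch of an interactive zonecfg session that gives an existing zone two dedicated CPUs, a physical memory cap and its own virtual network interface. The zone name web01 and the lower-link net0 are placeholders, and the available resource properties vary somewhat between Solaris releases:

```
# Run from the global zone; lines beginning with "zonecfg:" are the interactive prompts.
zonecfg -z web01
zonecfg:web01> add dedicated-cpu
zonecfg:web01:dedicated-cpu> set ncpus=2
zonecfg:web01:dedicated-cpu> end
zonecfg:web01> add capped-memory
zonecfg:web01:capped-memory> set physical=4g
zonecfg:web01:capped-memory> end
zonecfg:web01> add anet
zonecfg:web01:anet> set lower-link=net0
zonecfg:web01:anet> end
zonecfg:web01> commit
zonecfg:web01> exit
```

Rebooting the zone (zoneadm -z web01 reboot) is the simplest way to make sure the new resource settings take effect.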
But there are also drawbacks, which mostly reflect the fact that Solaris containers debuted more than a decade ago, well before people were talking about the cloud and hyper-scalable infrastructure.
Solaris container cons include:
- Solaris container management doesn’t scale well. With Enterprise Manager Ops Center, you can only manage as many zones as you can handle manually.
- You can’t spin up containers quickly based on app images, as you would with Docker or CoreOS, at least for now. This makes Solaris containers impractical for continuous delivery scenarios. But Oracle says it is working to change that by integrating Docker with Solaris zones. So far, though, it’s unclear when that technology will arrive in Solaris.
- There’s not much choice when it comes to management. Unlike the Linux container world, where you can choose from dozens of container orchestration and monitoring tools, Solaris only gives you Oracle solutions.
The bottom line: Solaris containers are not as flexible or nimble as Linux containers, but they’re relatively easy to work with. And they offer powerful features, especially when you consider how old they are. If you work with Oracle data centers, Solaris containers are worth checking out, despite being a virtualization solution that gets very little press these days.
Solaris Containers: What You Need to Know is published by the Sumo Logic DevOps Community. If you’d like to learn more or contribute, visit devops.sumologic.com. Also, be sure to check out Sumo Logic Developers for free tools and code that will enable you to monitor and troubleshoot applications from code to production.
About the Author
Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO.