Imagine carrying around a bag of cell phones everywhere because you needed a new, separate piece of hardware for every single app — games, email, etc. — that you run on a single smartphone today.

Seems crazy, but that’s essentially what our model for computing used to be, with standalone silos of hardware for separate applications. When an application wasn’t being used, the hardware and operating system dedicated to it were still sucking power and resources. It was highly inefficient. (In our bag-of-cellphones analogy, this would be like carrying around the phone dedicated to games all day, even though you only needed to do email.)

Then along came virtual machines (VMs), which allowed many applications to run on top of a single piece of hardware (by making it look like multiple physical computers). This yielded anywhere from 3x to 10x better capacity utilization. Because the software could run independently of its underlying hardware, virtual machines in data centers meant we weren’t limited to one particular operating system; we could host both Linux and Windows on the same physical machine. The downside of that flexibility is that it introduced an entirely new layer of software (and an entirely new practice of systems management to care for it) that sat between the hardware and the operating system. Having what was essentially “an operating system on top of an operating system” caused some performance issues.

The holy grail here would have been virtualization, but with bare-metal performance … as if the operating system were running right on top of the CPU without the middle virtual layer.

That’s where containers come in. Containers are another way of isolating an application from the underlying hardware. They serve the same purpose as a virtual machine — isolated instances that share the host operating system, each with a self-standing, self-executing application inside — but they also deliver bare-metal performance, because there’s no virtualization layer in between.
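To make that “no virtualization layer” point concrete, here is a minimal sketch. The post doesn’t name a container runtime, so this assumes Docker and its Python SDK (docker-py) purely as an illustration: the container’s process shows up directly in the host’s process table, because it is an ordinary host process isolated by the kernel rather than a guest running under a hypervisor.

```python
# Minimal sketch: assumes a local Docker daemon and `pip install docker`.
import docker

client = docker.from_env()

# Start a long-running container; "sleep 60" is just a stand-in workload.
container = client.containers.run("alpine:3.19", "sleep 60", detach=True)

# Reload runtime attributes and read the PID as seen by the *host* kernel.
# The containerized process runs directly on the host, isolated by
# namespaces and cgroups, with no hypervisor in the path.
container.reload()
print("host PID of containerized process:", container.attrs["State"]["Pid"])

# Clean up the demo container.
container.stop()
container.remove()
```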

Why now and what next?

Containers aren’t actually a new thing. They’ve been around for a long time, but they’re taking off now for a few reasons. One reason is that Windows has become less prevalent in the datacenter; one of the downsides of containers compared to VMs is that they can’t run multiple operating systems, like Windows on top of Linux. Another reason is the rise of “microservices” app architectures, which are especially suited to containers because they break an application into discrete pieces of functionality that can scale independently, like LEGO building blocks.

System administrators and people deploying code find containerization convenient because every part of the application — the code, its libraries, and the slice of the operating system it depends on — ships as a single, self-contained entity. This also makes it very easy to host many containers and move them around, for redundancy/failure tolerance, capacity, feature testing, and other reasons.
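As a sketch of that self-contained packaging (again assuming Docker and docker-py; the image name “demo-app” and the trivial Dockerfile below are hypothetical), everything the app needs is declared once and baked into a single portable image, which is exactly the unit you copy or ship to another host:

```python
# Minimal sketch: assumes a local Docker daemon and `pip install docker`.
import io
import docker

client = docker.from_env()

# Everything the app needs (base userland, language runtime, libraries)
# is declared in one place and built into a single image.
dockerfile = b"""
FROM python:3.12-slim
RUN pip install flask
CMD ["python", "-c", "print('app would start here')"]
"""

image, _logs = client.images.build(
    fileobj=io.BytesIO(dockerfile), tag="demo-app", rm=True
)

# The image is the thing you move around: image.save() yields a tar
# stream that can be copied to any other host and loaded there.
with open("demo-app.tar", "wb") as f:
    for chunk in image.save():
        f.write(chunk)
```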

For me, the next step in containerization is treating the datacenter, with all its containers, like one giant computer or server. Many applications today are really distributed systems: they aren’t confined to a single container. We might have an application that consists of ten containers running together. We could have 1,000 applications running across 10,000 containers. Or we might have a single big data job that involves multiple, interdependent applications.

So there needs to be a simple abstraction that runs the operating environment and manages how all of this gets utilized for the right capacity, reliability, and performance (the key metrics by which a datacenter operates). The key to that is comprehensive management of the entire operating environment; that’s what needs to happen next.
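One way to picture that management layer in miniature is a reconciliation loop: compare the desired number of replicas of a service with what is actually running, and converge the two. This toy sketch (assuming docker-py, a single host, and hypothetical names like “demo-service”) compresses onto one machine what an orchestrator does across thousands:

```python
# Toy sketch: one host standing in for a datacenter; assumes a local
# Docker daemon and `pip install docker`. Names and counts are illustrative.
import time
import docker

client = docker.from_env()
DESIRED_REPLICAS = 3  # the declared state for our hypothetical service

def reconcile():
    # Observe actual state: replicas are found by a label we put on them.
    running = client.containers.list(filters={"label": "app=demo-service"})
    # Converge toward desired state by scheduling any missing replicas.
    for _ in range(DESIRED_REPLICAS - len(running)):
        client.containers.run(
            "alpine:3.19", "sleep 300",
            labels={"app": "demo-service"}, detach=True,
        )

# A real orchestrator runs this loop continuously, also choosing *which*
# machine each replica lands on for capacity, reliability, and performance.
for _ in range(10):
    reconcile()
    time.sleep(5)
```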

— Peter Levine