[Storage Switzerland] DOCKER: What do Storage Pros need to know?

Docker was created to solve the problems that organizations face when they implement server virtualization on a wide scale: overhead and inefficiency. These challenges occur because virtualization is a sledgehammer applied to the problem it was designed to solve: allowing multiple applications to run simultaneously on the same physical hardware in such a way that if one application fails, the rest are not impacted. This is the real goal of virtualization: isolation of applications, so that a misbehaving application does not impact another application or its resources.

The other “goals”, like consolidation, mobility, and improved data protection, are not goals at all; they are simply outcomes of achieving the real goal. The problem is that virtualization places two significant taxes on the data center in achieving its primary goal of isolating applications. These taxes have led to a new form of virtualization, called containers, and Docker is the container solution getting most of the attention.

The Virtualization Taxes

The first tax that virtualization places on the data center is that an entire server must be virtualized to achieve the isolation goal. The problem is that what most data centers actually want is application or operating system isolation, not server isolation. To isolate a single application, the entire operating system and the server’s hardware are virtualized. The result is hundreds of virtual machines, each running a duplicate set of base operating system processes that consume memory and CPU resources.
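To put a rough number on that tax (the figures here are illustrative assumptions, not measurements): if each idle guest operating system consumes on the order of 1GB of RAM, then a host running 200 virtual machines dedicates roughly 200GB of memory to duplicate operating system copies before a single application does any useful work.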

The second tax is the performance overhead of the hypervisor. Applications simply will not perform as well when virtualized as they would on bare metal systems. For a great many applications this overhead is a non-issue because they don’t need more performance than the hypervisor can deliver. For some, however, the performance impact can be noticeable, especially when the application needs to interface with something external to the hypervisor, like networks or storage. In these instances the hypervisor must translate between the abstracted virtual world and the real data center. More powerful processors can offset this hypervisor overhead, but the result is still less efficient than the native, bare metal use case.

The Docker Container

Docker and other container technologies are essentially application virtualization. A container creates a virtual instance of an application, or even part of an application, and isolates it by giving it its own copy of the operating system’s user space, the location in which applications normally run. But all the containers run on the same server. Docker allows every container to share the host operating system’s kernel, memory, and processes, so a full operating system does not need to run separately for each application. Today Docker is Linux based and runs Linux applications, but support for Windows is on the way and Microsoft is engaged with the Docker community. In the meantime, companies like DH2i are creating container technologies for Windows applications. There is also work underway on an open container project that promises compatibility between the various container implementations. No matter what environment is common in the data center, containers will be an option, and storage administrators need to be prepared for their arrival.
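As a minimal sketch of that sharing in practice (assuming a Linux host with Docker installed, and using the public nginx image and the names app1 and app2 purely as examples), the commands below start two isolated containers and then show that both report the same kernel as the host, because only the user space is duplicated:

    # Start two containers from the same image; each gets its own isolated user space.
    docker run -d --name app1 nginx
    docker run -d --name app2 nginx

    # All three commands print the same kernel version: there is only one
    # operating system kernel running, shared by the host and every container.
    uname -r
    docker exec app1 uname -r
    docker exec app2 uname -r

Contrast this with two virtual machines, where each would boot and maintain its own complete kernel and set of system services on top of the hypervisor.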
