Container technology: opportunity and challenge for network engineers



In recent years, physical limitations such as space and electric power forced data centers to adopt virtualization strategies. Virtual servers redefined what we know about the data center and added a new layer of abstraction.

Just as virtual machines (VMs) transformed the way we manage information, container technology is beginning to make a significant impact in data centers.

Containers are often compared to virtual machines. The two technologies are similar in that both abstract the workload from the underlying hardware, but their architectures are completely different.

Container technology provides portability (something lacking in physical environments), so an application can be packaged and moved easily. It can also improve application interoperability (legacy environments suffered from OS version-to-version incompatibilities) and help reduce expenses.

Containers are often considered potential lightweight alternatives to hypervisor technologies. In a virtual machine environment, each instance needs a full operating system (OS) to function.

With container technology, however, the abstraction occurs at the OS level, so a container does not need an entire guest system the way a VM does. Containers share the host's OS, which makes them thinner and more efficient than hypervisor-based VMs.

Developers need flexibility to build, transfer and run distributed applications. Containers might be the key, because they can provide the following benefits to developers and system administrators:

  1. Portability: Containers allow an application to run in different environments without changes, because they abstract the underlying infrastructure from the application.
  2. DevOps: As with VMs, once an application is containerized, a snapshot can be created to maintain the integrity of its content, which also provides an easy way to roll back to a previous state.
  3. Density: Getting the most value from the available infrastructure.

Many businesses have started to move their server apps from VMs to Linux containers. Docker and CoreOS are the leading open-source container platforms that automate the deployment of Linux applications into containers.

The two platforms vary in capabilities, but they provide the same advantages: portability, DevOps and density. A developer using either platform can containerize each component of a complex app, distribute it over multiple servers, and still provide efficient communication.

Opportunity and challenge for network engineers

Although it is app developers who have championed and implemented container technology, its growing networking challenges have inevitably pulled in network engineers.

Networking features in Docker were first designed for developers, to allow simple connectivity between the components of an application. Docker's only concern was to provide an IP address and a network interface and let containers communicate with each other. In a production environment, however, where security and scalability are fundamental, networking becomes one of the most important aspects of container technology, and that is when the networking pros need to be involved.

Containers can have a big impact on network performance. For example, a developer working on an application could launch many containers using Docker's default networking features and unintentionally add many more endpoints to the network. Distributed applications need solid networking capabilities: scalability, performance and security.

Networking in containers

Docker gained popularity quickly because it was the first to make container deployment accessible to everyone. Many people equate containers with Docker, and for good reason, but there is a lot more going on than a single company.

Except for testing scenarios, almost all container implementations require basic interconnectivity. Docker and rkt (from CoreOS) take a similar approach to networking: both ship with predefined default network setups and can use third-party plugins (sometimes the best option). There are specific ways containers can interconnect with each other, with VMs and with bare-metal servers.

Which network setup shall I use with containers?

It depends on the application's requirements, performance needs and workload placement (private or public cloud).

The following is a list of common networking setups with a brief description of each:

  1. None: This method is often called loopback, because it gives the container a loopback interface but no external network interface. It is normally used in testing scenarios.
  2. Bridge: Using a Linux bridge, containers connect to a virtual switch on the host. This method provides single-host networking; the IP addresses assigned to the containers are not reachable from outside the host, so only a small network within a single node can be created.
  3. Host: This native Docker method lets a container share the network namespace of the local host, so the container uses the host's interfaces directly.
  4. Overlay: By creating an overlay network (a virtual network built on top of the underlying network infrastructure), a significantly larger network can be created than with bridged networks. Overlay networks are becoming the popular choice because of their simplicity: a single network of containers can span multiple hosts using a tunneling protocol such as VXLAN.
  5. Underlay: Sometimes the environment requires containers with IP addresses on the same subnet as the underlay network.

To accomplish this, Docker offers a MACVLAN driver that binds to a host interface. Each container gets a unique MAC address and an IP address from the physical network that the host node belongs to.
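As a sketch of how these setups look in practice, the Docker CLI commands below create a network of each type. This assumes a recent Docker Engine with a running daemon; the network names, the `eth0` parent interface and the 192.168.1.0/24 addressing are placeholders for your own environment.

```shell
# None/loopback: the container gets only a loopback interface
docker run --rm --network none alpine ip addr

# Bridge: single-host networking; container IPs stay hidden behind the host
docker network create --driver bridge demo-bridge
docker run --rm --network demo-bridge alpine ip addr

# Host: the container shares the host's network namespace
docker run --rm --network host alpine ip addr

# Overlay: multi-host networking over VXLAN (requires swarm mode or an
# external key-value store to coordinate the participating hosts)
docker network create --driver overlay demo-overlay

# Underlay via MACVLAN: each container gets its own MAC address and an IP
# on the physical subnet; parent interface and addresses are placeholders
docker network create --driver macvlan \
  --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
  -o parent=eth0 demo-macvlan
```

With the MACVLAN network in place, a container started with `--network demo-macvlan` is reachable from the physical subnet just like a bare-metal host would be.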

Container networking models

To deal with container networking, Docker proposed the Container Network Model (CNM), shown in Figure 1, a specification built around three main elements:

  1. Endpoints
  2. Network
  3. Sandbox


Figure 1

CNM was first implemented in libnetwork, a multi-platform library for networking containers. Libnetwork can use the basic Linux kernel networking facilities that exist on the host so that every container can connect to the network, and it provides both native and remote drivers that supply connectivity for containers.
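To see the CNM elements in action, the sketch below (assuming a running Docker daemon; the network and container names are illustrative) attaches one container's sandbox to two networks, creating one endpoint per network:

```shell
# Two CNM networks, each backed by the native bridge driver
docker network create --driver bridge frontend
docker network create --driver bridge backend

# The container's network stack is its sandbox; joining "frontend"
# creates the first endpoint inside it
docker run -d --name app --network frontend alpine sleep 3600

# Connecting to "backend" adds a second endpoint to the same sandbox
docker network connect backend app

# Inspect the sandbox: one entry per endpoint/network pair
docker inspect --format '{{json .NetworkSettings.Networks}}' app
```

The final command prints one JSON entry per network the container has joined, each with its own MAC and IP address, which is exactly the endpoint abstraction CNM describes.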

After Docker proposed CNM, another big name in the industry put forward a competing model: CoreOS introduced the Container Network Interface (CNI), first used with rkt, which is also gaining popularity in the industry.

In CNI, each container or pod gets its own network namespace. CNI can handle communication within the network differently depending on the network plug-in, as shown in Figure 2. Each CNI plug-in must make the necessary changes on the host and provide a network interface inside the network namespace; it must also use an appropriate IPAM plug-in to assign an IP address to that interface.
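As an illustration of what a CNI plug-in consumes, a minimal network configuration file might look like the following. This is a sketch using the reference `bridge` plug-in together with the `host-local` IPAM plug-in; the network name, bridge name and subnet are arbitrary placeholders.

```json
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

Here the `bridge` plug-in wires a veth interface into the container's namespace, while the `host-local` IPAM plug-in hands out an address from the configured subnet.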

Figure 2


A comparison of popular Container Network solutions

Although Docker and CoreOS have driven the container industry for quite some time, a number of emerging projects and vendors are building improved networking solutions on top of the technology:

  1.  Kubernetes: proposed by Google, it is an open-source platform for automating the deployment, scaling and management of containerized applications. Kubernetes's approach to networking focuses on pod-to-pod communication and is based on the CNI model; on each node, the kubelet manages network setup and configuration.
  2.  Contiv: proposed by Cisco, Contiv is an open-source project that applies operational policies to container infrastructure (physical or virtual). Contiv networking is a container plugin that can provide infrastructure and security policies for multi-tenant microservices deployments. Contiv uses the remote driver and IPAM interfaces available in Docker.
  3.  Project Calico: a pure layer-3 approach to virtual networking. The main idea behind Calico is that data streams should be routed, not encapsulated. Calico integrates with cloud orchestration systems (such as Kubernetes or OpenStack) and allows secure IP communication between containers, VMs and bare-metal servers. It includes a libnetwork driver that supports networking for Docker containers.
  4.  Weave Net: an open-source container networking solution from Weaveworks for on-premises, cloud or hybrid environments. The idea behind Weave Net is that a router container is started on every node that needs interconnectivity; the weave routers then set up tunnels with each other, creating a virtual network that lets Docker containers connect across multiple nodes.
  5.  Open Virtual Network (OVN): an open-source network virtualization system from the Open vSwitch project. OVN can be used to connect containers or VMs to private L2 or L3 networks, and it integrates with Docker's libnetwork to provide network virtualization.
  6.  Apache Mesos: provides networking support for containers, including an IP per container and service discovery. Mesos integrates the CNI specification from CoreOS.
  7.  Flannel: another open-source container networking project from CoreOS, originally designed for Kubernetes. Flannel is a virtual network that gives each host a subnet for use with container runtimes. Flannel is based on the CNI specification.
  8.  Cloud Foundry: an open-source solution for container networking built around VXLAN overlay networks. The recent netman-release 0.6.0 is a pluggable container networking stack that uses Flannel. Cloud Foundry has adopted Docker's runtime.


Containers have gone mainstream remarkably fast, and they are here to stay.

They have become crucial for application development and management, touching most of the technology we see today. Since Docker released its tool set for creating containers, tech giants such as Google, Microsoft and Amazon (to mention a few) have adopted the technology.

Starting with Docker in 2013, container solutions have sprung up everywhere with incremental improvements. For now, it is still difficult to say which solution best fits which scenario, because the technology is not yet fully mature. For example, CoreOS improved on existing networking security issues, but Docker has been in the game longer, so its supporting community is stronger.

Containers will also appear throughout hybrid cloud environments, because they improve workload portability between public and private clouds. Beyond the list above, we expect to see more open-source projects integrating with container technology.


Diego Asturias


Diego is a full-time entrepreneur and researcher. Passionate for technology and innovation with more than 8 years in the networking engineering industry. Experience in Chinese and South Korean telecommunications, such as Huawei and Samsung. CCNA and AWS certified and still searching for new frontiers.


About us

RouterFreak is a blog dedicated to professional network engineers. We focus on network fundamentals, product/service reviews, and career advancements.