
Uncontainerizable VNFs in a CNF environment

Cloud-native network functions (CNFs) are a hot topic in network architecture. CNFs use containers as the base for network functions and would thus replace today's most widely used standard, Virtual Network Functions (VNFs). In such a scenario, a container orchestration platform such as Kubernetes could be responsible not only for orchestrating the containers, but also for directing network traffic to the proper pods. While this remains an area under research, it has aroused considerable interest among industry leaders. The Cloud Native Computing Foundation (CNCF) and LF Networking (LFN) have joined forces to launch the Cloud Native Network Functions (CNF) Testbed in order to foster the evolution from VNFs to CNFs. Moreover, Dan Kohn, Executive Director of the CNCF, believes that CNFs will be the next big thing in network architecture.

CodiLime’s research agenda

CodiLime's R&D team is on top of the research on CNFs. We aspire to play an active role in this burgeoning area, and have developed two solutions for harvesting the benefits of VNFs and CNFs simultaneously. Given that these architectures are based on completely different concepts, this is no easy task. This blog post draws these seemingly disparate points together by presenting a technical solution to a business case that called for the use of VNFs in a CNF environment. The topic was originally presented at ONF Connect, held on 10-13 September in Santa Clara, California. In the follow-up to this blog post, to be published in the last week of September, we will show how to use Tungsten Fabric to enable smooth communication between OpenStack and Kubernetes. That topic will be presented at ONS, which is being held on 23-25 September in Antwerp, Belgium.

Harnessing the benefits of VNFs and CNFs together

Let's assume that you have the following business case. You have been using containers for their lightness and portability. Flexibility and resilience are the qualities you value the most. At the same time, you want to deploy apps in a simple way, so you have been using Kubernetes to manage your resources. There remains just one thorny pain point: a business-critical network function you rely on is a black-box VM image that cannot be containerized. To solve this tricky problem, we need to keep using containers while preserving a vital function that runs only as a VM.

The advent of virtualized network functions

Before we delve into the details of our solution, I'll briefly discuss the main features of VNFs and CNFs. The concept of Network Function Virtualization (NFV) was first introduced in 2012 by a group of service providers (AT&T, BT, Deutsche Telekom, Orange, Telecom Italia, Telefónica and Verizon) in response to a problem they were having with network infrastructure. Back then, their services were based on hardware such as firewalls or load balancers provided by specialized hardware companies. Everything was installed and configured manually, and new services and network functions often required a hardware upgrade. All of that gave the telcos precious little flexibility. Their solution was to virtualize some network functions, such as firewalls, load balancers or intrusion detection devices, and run them as virtual machines, thus eliminating the need for specific hardware. Thanks to this, new functions could be deployed more quickly, network scalability and agility increased, and the use of network resources could be optimized. Good examples of VNFs include the virtual Broadband Network Gateway (BNG) and the virtualized Evolved Packet Core (vEPC).

Uncontainerizable VNFs

Yet not every VNF can be moved into a Docker container, for a number of reasons. Sometimes a VNF is a black box from an external provider (e.g. PAN-OS as a firewall). In such cases, it can only be run as it is, without any modification or derivative works. A VNF may also be based on an operating system other than Linux, or its software may be provided as an image of a whole hard drive. In other cases, VM-based network functions do not use an external kernel at all, but are built as unikernels. Such VNFs cannot be containerized to share the common kernel with CNFs on the host. Some network functions are already battle-tested and packaged as VM images, making them hard to change. A firewall solution might be based on a kernel other than Linux (e.g. PF on *BSD). A VPN or other software may use Linux kernel modules that are not available for the OS used on worker nodes. There may also be a dependency on older versions of MS SQL Server, MS Exchange or MS Active Directory which are not available as Docker images. While newer versions of MS SQL Server can run on Linux, some software may still require an older version of the server.

Moving to CNFs

Cloud-native Network Functions (CNFs), on the other hand, are a new way of providing the required network functionality using containers. Here, software is distributed as a container image and can be managed using container orchestration tools (e.g. Docker images in a Kubernetes environment). Typical examples are containerized implementations of the same kinds of functions discussed above, such as firewalls, load balancers or routing software, distributed and deployed as container images.

Business requirements of the solution

Let's get back to business. We have a VNF that cannot be containerized, but we still want to use CNFs. So we need to create a converged setup for running containerized and VM-based network functions side by side. The requirements to be met are:

  • A single user interface: the same type of definition for both containers and VMs, all controlled by a single API (Kubernetes)
  • No need to install Kubernetes on OpenStack or OpenStack on Kubernetes
  • No need to manually maintain a libvirt installation on top of Kubernetes
  • No need for a separate libvirt instance per VM with custom object definitions for VMs (which KubeVirt requires)

Technical solution

To achieve this lofty goal in a technically mature way, we did the following:

  • Use Kubernetes and leverage its CRI mechanism to run VMs as pods
  • Configure the nodes to use CRI Proxy/Virtlet
  • Describe the VM using a pod definition, passing the URL of a qcow2 image in place of the usual Docker image for a container (see the sketch below)
  • Run such a pod as part of a Deployment/ReplicaSet/DaemonSet
  • Enjoy it equally in the setups of different cloud providers!
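To make the pod-definition step concrete, here is a minimal sketch of what such a VM pod could look like. The image URL, annotation keys and node label below are illustrative assumptions modelled on typical Virtlet examples, not values taken from our deployment; check them against the Virtlet documentation for the version you use.

```yaml
# A minimal, hypothetical VM-as-a-pod definition for Virtlet.
apiVersion: v1
kind: Pod
metadata:
  name: blackbox-vnf
  annotations:
    # Ask the CRI proxy to hand this pod to Virtlet instead of the
    # default container runtime (annotation key as used in Virtlet examples).
    kubernetes.io/target-runtime: virtlet.cloud
    # Optional Virtlet-specific tuning, e.g. the number of vCPUs.
    VirtletVCPUCount: "2"
spec:
  # Schedule only on nodes that run Virtlet (label name is an assumption).
  nodeSelector:
    extraRuntime: virtlet
  containers:
  - name: vnf-vm
    # URL of the qcow2 image, prefixed so that Virtlet recognizes it,
    # used in place of a regular Docker image reference.
    image: virtlet.cloud/example.com/images/vnf-firewall.qcow2
    resources:
      limits:
        cpu: "2"
        memory: 2Gi
```

Apart from the virtlet.cloud image prefix and the annotations, this is an ordinary pod spec: resource limits, volumes and scheduling constraints are expressed exactly as they would be for a container.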

In this technical solution, two elements play a crucial role: Virtlet and the CRI (Container Runtime Interface). The CRI mechanism in Kubernetes is a plugin interface that enables the use of many different container runtimes without recompiling Kubernetes components. Virtlet, on the other hand, allows Kubernetes to see VMs as pods, i.e. in the same way that dockershim sees containers. Virtlet is also CRI-native. It describes resource limits in the same way as for containers: the definition of resource limits for containers and for VMs is identical and expressed by the same part of the YAML file. Virtlet also provides the same network configuration using CNI (an interface for configuring container networks, also used by Kubernetes); at this point we highly recommend using SR-IOV to ensure the best performance for NFV workloads. Virtlet runs qcow2-based VM images using the libvirt toolkit to manage the virtualization platform, with a single libvirt instance per node. Since Virtlet is a CRI implementation, all VMs are seen in Kubernetes as pods, so they can be part of other Kubernetes-native objects like Deployments, ReplicaSets and StatefulSets.
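As a sketch of that last point, the VM pod template above can be wrapped in a standard Deployment; the names and label values are again hypothetical.

```yaml
# Hypothetical Deployment wrapping the VM pod template from the previous sketch.
# Nothing Virtlet-specific lives at the Deployment level: the VM-related
# annotations and the virtlet.cloud image reference sit in the pod template,
# exactly as they would for a containerized workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blackbox-vnf
spec:
  replicas: 2                        # two identical VM instances
  selector:
    matchLabels:
      app: blackbox-vnf
  template:
    metadata:
      labels:
        app: blackbox-vnf
      annotations:
        kubernetes.io/target-runtime: virtlet.cloud   # route the pod to Virtlet
    spec:
      nodeSelector:
        extraRuntime: virtlet        # assumed label on Virtlet-enabled nodes
      containers:
      - name: vnf-vm
        image: virtlet.cloud/example.com/images/vnf-firewall.qcow2
        resources:
          limits:
            cpu: "2"
            memory: 2Gi
```

Scaling the VMs up or down is then just a matter of changing the replica count, and a failed VM pod is recreated by its controller like any other pod.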

Figure: What's happening on the worker node

Key findings

It is true that Virtlet requires a great deal of configuration work, but it will pay off thanks to the following advantages:

  • The same API handles both VMs and containers (e.g. kubectl logs/attach)
  • The resources for VMs and containers are defined in the same way
  • A VM becomes a native Kubernetes object and can be part of a DaemonSet, ReplicaSet, Deployment or any other object that has a native k8s controller
  • Virtlet supports the SR-IOV interface, which ensures the performance and latency levels required for NFV workloads
  • SR-IOV makes it possible to connect hardware directly with VMs
  • Multiple NICs can be defined using CNI multiplexers like CNI-Genie. This is necessary to ensure that SR-IOV is used for user-facing traffic without conflicting with other network traffic such as intra-pod connectivity
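
To illustrate the multi-NIC point, below is a hedged sketch of a CNI-Genie-style pod annotation attaching two networks: a default overlay for intra-cluster traffic and an SR-IOV network for user-facing traffic. The annotation key and the plugin names are assumptions based on the CNI-Genie examples and depend on the CNI configuration of the cluster.

```yaml
# Hypothetical multi-NIC VM pod using a CNI multiplexer (CNI-Genie style).
apiVersion: v1
kind: Pod
metadata:
  name: blackbox-vnf-multinic
  annotations:
    kubernetes.io/target-runtime: virtlet.cloud   # still a Virtlet VM pod
    # One interface per listed plugin; plugin names are assumptions and must
    # match the CNI configurations present on the node.
    cni: "flannel,sriov"
spec:
  nodeSelector:
    extraRuntime: virtlet
  containers:
  - name: vnf-vm
    image: virtlet.cloud/example.com/images/vnf-firewall.qcow2
```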

Piotr Skamruk

Senior Software Engineer

Piotr is a long-time GNU/Linux and Forth language enthusiast, sys administrator and sys developer. He has worked on kernel sources, backend apps and even on frontends in a wide variety of languages. He worked on CoreOS rkt and was also a member of the Virtlet team, writing a Kubernetes runtime that made it possible to run VMs as pods.
