
Tungsten Fabric architecture — an overview

SDN, or Software-Defined Networking, is an approach to networking that enables programmatic and dynamic control of a network. It is considered the next step in the evolution of network architecture. To implement this approach effectively, you will need a mature SDN controller such as Tungsten Fabric. Read our blog post to get a comprehensive overview of the Tungsten Fabric architecture.

What is Tungsten Fabric

Tungsten Fabric (previously OpenContrail) is an open-source SDN controller that provides connectivity and security for virtual, containerized or bare-metal workloads. It is developed under the umbrella of the Linux Foundation. Since most of its features are platform- or device-agnostic, TF can connect mixed VM-container-legacy stacks; all Tungsten Fabric sees is a source and a target API. The technology stack that TF can connect includes:

  • Orchestrators or virtualization platforms (e.g. OpenShift, Kubernetes, Mesos or VMware vSphere/Orchestrator)
  • OpenStack (via a monolithic plug-in or an ML2/L3 mechanism driver)
  • SmartNIC devices
  • SR-IOV clusters
  • Public clouds (multi-cloud or hybrid solutions)
  • Third-party proprietary solutions

One of TF’s main strengths is its ability to connect both the physical and virtual worlds: that is, to place different workloads on one network regardless of their nature, be they virtual machines, physical servers or containers.

To deploy Tungsten Fabric, you may need Professional Services (PS) to integrate it with your existing infrastructure and ensure ease of use and security.


Tungsten Fabric components

The entire TF architecture can be divided into the control plane and data plane components. Control plane components include:

  • Config—managing the entire platform
  • Control—sending rules for network traffic management to vRouter agents
  • Analytics—collecting data from other TF components (config, control, compute)

Additionally, there are two optional components of the Config:

  • Device Manager—managing underlay physical devices like switches or routers
  • Kube Manager—observing and reporting the status of a Kubernetes cluster

Data plane or compute components include:

  • vRouter and its agent—managing packet flow at the virtual interface vhost0 according to the rules defined in the Control component and received via the vRouter agent

TF Config—the brain of the platform

TF Config is the main part of the platform, where network topologies are configured. It is the biggest TF component and the one worked on by the largest number of developers. In a nutshell, it is a database where all configurations are stored, and all other TF components depend on it. The term itself has two meanings:

  • The VM where all the Config containers run
  • A container named “config” where the entire business logic lives

TF Config has two APIs: a northbound API (provided by Config itself) and a southbound API (provided by other control plane components). The first is the more important one here, because it is the API used for communication with Config. The southbound API is used by Device Manager (also a part of TF, discussed later) and other tools.

TF Config uses an intent-based approach: the network administrator does not need to define all the conditions, only how the network is expected to work; the remaining elements are configured automatically. For example, say you want to enable network traffic from one network to another. It is enough to define this intent, and all the magic is done under the hood.

The schema transformer listens to the database to check whether a new entry has appeared. When such an entry is added, it checks for missing data and completes it using the Northbound API. In this way, network routes are created, the firewall is opened to let traffic flow between the two networks, and the devices obtain all the data necessary to get the network up and running.

An intent-based approach automates network creation. Many settings need to be defined when creating a new network, and setting them all up takes time; the process is also error-prone. TF simplifies everything, as most settings have sensible defaults and are completed automatically.

When it comes to communicating with Config, its API is exposed over HTTP. You can use the TF UI or cURL, a command-line tool for transferring data with a URL syntax that supports a number of protocols, including HTTP, HTTPS and FTP. There is also a TF CLI tool.
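To make this concrete, here is a minimal sketch of talking to the Config API from Python. It assumes an unauthenticated Config API server listening on its default port (8082) at a placeholder address, and the project and network names are made up; production deployments typically put an authentication layer in front of this API.

```python
import requests

# Placeholder address; assumes the Config API server's default port (8082)
# and no authentication in front of it.
CONFIG_API = "http://192.0.2.10:8082"

# List the virtual networks Config currently knows about.
resp = requests.get(f"{CONFIG_API}/virtual-networks")
resp.raise_for_status()
for vn in resp.json()["virtual-networks"]:
    print(vn["fq_name"], vn["uuid"])

# Express an intent: create a new virtual network under a project.
# The schema transformer fills in the remaining low-level details.
payload = {
    "virtual-network": {
        "parent_type": "project",
        "fq_name": ["default-domain", "demo-project", "blue-net"],  # made-up names
    }
}
resp = requests.post(f"{CONFIG_API}/virtual-networks", json=payload)
resp.raise_for_status()
print("created:", resp.json()["virtual-network"]["uuid"])
```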

Figure 1: Tungsten Fabric Config with OpenStack
Figure 2: Tungsten Fabric Config with Kubernetes

Managing physical devices with Device Manager

Device Manager is an optional component with two major functions. Both are related to fabric management, which is the management of underlay physical devices like switches or routers.

First, it is responsible for listening to configuration events from the Config API Server and then for pushing required configuration changes to physical devices. Virtual Networks, Logical Routers and other overlay objects can be extended to physical routers and switches. Device Manager enables homogeneous configuration management of overlay networking across compute hosts and hardware devices. In other words, bare-metal servers connected to physical switches or routers may be a part of the same Virtual Network as virtual machines or containers running on compute hosts.

Second, this component manages the life cycle of physical devices. It supports the following features:

  • fabric onboarding—detect and import brownfield devices
  • zero-touch provisioning—detect, import and configure greenfield devices
  • software image upgrade—individual or bulk upgrade of device software

Today, only Juniper’s MX routers and QFX switches have an open-source plug-in.

Device Manager: under the hood

Device Manager reports job progress by sending UVEs (User Visible Entities) to the Collector. Users can retrieve job status and logs using the Analytics API and its Query Engine. Device Manager works in full or partial mode. There can be only one active instance in full mode. In this mode, it is responsible for processing events sent via RabbitMQ. It evaluates high-level intents like Virtual Networks or Logical Routers and translates them into a low-level configuration that can be pushed to physical devices. It also schedules jobs on the message queue that can be consumed by other instances running in partial mode. Those followers listen for new job requests and execute Ansible scripts, which push the desired configuration to the devices.
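As a rough illustration of this leader/follower design, the sketch below shows what a partial-mode instance's consumer loop could look like in Python; the RabbitMQ queue name and job message format are hypothetical, not Device Manager's actual ones.

```python
import json
import subprocess

import pika  # RabbitMQ client library

# Connect to the queue on which the single full-mode instance schedules jobs.
connection = pika.BlockingConnection(pika.ConnectionParameters("192.0.2.30"))
channel = connection.channel()
channel.queue_declare(queue="device_manager.jobs")  # hypothetical queue name

def on_job(ch, method, properties, body):
    # A job request names the playbook to run and the target devices.
    job = json.loads(body)
    subprocess.run(
        ["ansible-playbook", job["playbook"], "--limit", ",".join(job["devices"])],
        check=True,  # surface failures so the job can be reported as failed
    )

channel.basic_consume(queue="device_manager.jobs", on_message_callback=on_job,
                      auto_ack=True)
channel.start_consuming()  # block and execute jobs as they arrive
```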

Device Manager has the following components:

  • device-manager—translates high-level intents into a low-level configuration
  • device-job-manager—executes Ansible playbooks that configure routers and switches
  • DHCP server—in the zero-touch provisioning use case, a physical device gets its management IP address from a local DHCP server running alongside device-manager
  • TFTP server—in the zero-touch provisioning use case, this server provides a script with the initial configuration

Kube Manager

Kube Manager is an additional component launched together with the other Tungsten Fabric SDN controller components. It establishes communication between Tungsten Fabric and Kubernetes and is essential to their integration. In a nutshell, it listens for Kubernetes API server events such as the creation, modification or deletion of k8s objects (pods, namespaces or services). When such an event occurs, Kube Manager processes it and creates, modifies or deletes the appropriate object in the Tungsten Fabric Config API. Tungsten Fabric Control will then find those objects and send the information about them along to the vRouter agent. After that, the vRouter agent can finally create a correctly configured interface for the container.

The following example should clarify this process. Let’s say an annotation is added to a namespace in Kubernetes, saying that the network in this namespace should be isolated from the rest of the network. Kube Manager picks up this information and changes the setup of the corresponding TF objects accordingly.
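The watch-and-translate pattern that Kube Manager implements can be sketched with the official Python Kubernetes client as follows. This is illustrative only: the translate_to_tf_config helper is hypothetical and merely marks where Kube Manager would call the TF Config API.

```python
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

def translate_to_tf_config(event_type, namespace):
    # Hypothetical stand-in: here the real Kube Manager would create,
    # modify or delete the corresponding object via the TF Config API.
    annotations = namespace.metadata.annotations or {}
    print(f"{event_type}: sync namespace {namespace.metadata.name}, "
          f"annotations={annotations}")

# Watch namespace events (ADDED, MODIFIED, DELETED) and translate each one.
w = watch.Watch()
for event in w.stream(v1.list_namespace):
    translate_to_tf_config(event["type"], event["object"])
```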

Control

The Control component is responsible for sending network traffic configurations to vRouter agents. These configurations come from the Config’s Cassandra database, which offers consistency, high availability and easy scalability. The IF-MAP (Interface for Metadata Access Points) protocol is used to represent the configuration and operational state of the environment. The control nodes exchange routes with one another over iBGP to ensure that they all share the same network state. Communication between Control and vRouter agents is done via the Extensible Messaging and Presence Protocol (XMPP), a communications protocol for message-oriented middleware based on XML. Finally, Control communicates with gateway nodes (routers and switches) using the BGP protocol.

TF Control works much like the control plane of a hardware router: it steers the data plane by sending traffic flow configuration to vRouter agents, which handle traffic according to the instructions they receive, just as a hardware router’s forwarding plane follows its control plane. In TF architecture, physical routers and their agent services work alongside vRouters and vRouter agents, as Tungsten Fabric can handle both the physical and virtual worlds.

TF Control communicates with a vRouter using XMPP; the session is equivalent to a standard BGP session, though XMPP carries more information (e.g. configurations). Thanks to this reliance on XMPP, TF Control can send network traffic configurations to both vRouters and physical routers, and the code used for communication is exactly the same.

Figure 3: Tungsten Fabric Control

Analytics

Analytics is a separate TF component that collects data from other components (config, control, compute). The following data are collected:

  • Object logs (concrete objects in the TF structure)
  • System logs
  • Trace buffers
  • Flow statistics in TF modules
  • Status of TF modules (i.e. whether they are working and what their state is)
  • Debugging data (if a required data collection level is enabled in the debugging mode)

Analytics is an optional component: Tungsten Fabric works fine without it, using just its main components. It can even be enabled as an additional plugin long after the TF solution was originally deployed.

To collect the data coming from other TF components, an original Juniper protocol called Sandesh is used. “Sandesh” means “message” or “news”; fittingly, the protocol is the messenger that brings news about the SDN.

>> Read our gentle introduction to SDN.

In the Analytics component there are two databases. The first is based on Cassandra and contains historical data: statistics, logs and TF data flow information. It is shared by the Analytics and Config components. Cassandra writes data quickly but reads it more slowly, which makes it a good fit for writing and storing historical data. If there is a need to analyze how a TF deployment behaved over a longer period of time, this data can be read back. In practice, such a need does not arise very often; the feature is most often used by developers to debug a problem.

The second database is based on Redis and collects UVEs (User Visible Entities): information about existing virtual networks, vRouters and virtual machines, and about their actual state (whether or not they are working). These are the components of the system infrastructure defined by users (in contrast to the elements created automatically under the hood by TF). Since the data about their state is dynamic, it is stored in Redis, which lets users read it much more quickly than Cassandra would.

All these TF components send data to the Collector, which writes it to either the Cassandra or the Redis database. On the other side there is an API server, sometimes called the Analytics API to distinguish it from other API servers, e.g. the one in Config. The Analytics API provides a REST interface for extracting data from the databases.

Apart from these, Analytics has one additional component, called QueryEngine. It is an intermediary process that handles user queries for historical data. The user sends an SQL-like query to the REST port of the Analytics API (the API Server). The query is then passed to QueryEngine, which runs it against Cassandra and, via the Analytics API, sends the result back to the user.
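As a hedged sketch, such a query for recent system logs could look like the snippet below. It assumes the Analytics API on its default port (8081) at a placeholder address, and the MessageTable table of system log messages; exact table and field names vary between TF releases.

```python
import time

import requests

ANALYTICS_API = "http://192.0.2.20:8081"  # placeholder address, default port

# Analytics timestamps are expressed in microseconds since the epoch.
now_us = int(time.time() * 1_000_000)
query = {
    "table": "MessageTable",                     # system log messages
    "start_time": now_us - 10 * 60 * 1_000_000,  # the last ten minutes
    "end_time": now_us,
    "select_fields": ["MessageTS", "Source", "ModuleId", "Messagetype"],
}

# The API Server hands the query to QueryEngine, which runs it against
# Cassandra and returns the rows through the same REST endpoint.
resp = requests.post(f"{ANALYTICS_API}/analytics/query", json=query)
resp.raise_for_status()
for row in resp.json().get("value", []):
    print(row)
```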

Figure 4 shows the Analytics Node Manager and the Analytics Database Node Manager. In fact, there are many different node managers in the TF architecture; they monitor specific parts of the architecture and send reports about them. In our case, the Analytics Node Manager monitors the Collector, QueryEngine and API Server, while the Analytics Database Node Manager monitors the databases in the Analytics component. In this way, Analytics also collects data on itself.

Figure 4: Tungsten Fabric Analytics

The vRouter forwarder and agent

This component is installed on all compute hosts that run workloads. It provides integrated routing and bridging functions for network traffic from and between virtual machines, containers and external networks, and it applies the network and security rules defined by the Tungsten Fabric controller. The component is not mandatory, but it is required for any use case with virtualized workloads.

  • Agent

The agent is a user-space application that maintains XMPP sessions with the Tungsten Fabric controllers. Over these sessions it receives VRFs (Virtual Routing and Forwarding tables) and ACLs (Access Control Lists) derived from high-level intents like Virtual Networks, and it maintains a local database of both. The agent reports its state to the Analytics API by sending Sandesh messages with UVEs (User Visible Entities) containing logs and statistics. It is responsible for maintaining the correct forwarding state in the forwarder, and it also handles some protocols like DHCP, DNS and ARP.

Communication with the forwarder is achieved with the help of the KSync module, which uses Netlink sockets and shared memory between the agent and the forwarder. In some cases, the user-space application and the kernel module also exchange packets over the pkt0 tap interface. These mechanisms are used to update the flow table with entries based on the agent’s local data.

  • Forwarder

The forwarder performs packet processing based on flows pushed by the agent. It may drop the packet, forward it to the local virtual machine, or encapsulate it and send it to another destination.

The forwarder is usually deployed as a kernel module. In that case, it is a software solution independent of NIC or server type. Packet processing in kernel space is more efficient than in user space and leaves some room for optimization. The drawback is that the module can only be installed on a specific supported kernel version, although advanced users can build modules for other kernel versions. Default kernel versions are specified here.

The kernel module is released as a Docker image that contains a pre-built module and user-space tools. When this image is run, it copies binaries to the host system and installs the kernel module on the host (it needs to be run in privileged mode). After a successful installation, the vrouter module should be loaded into the kernel (“lsmod | grep vrouter”) and the new tap interfaces pkt0 and vhost0 created. If problems occur, checking the kernel logs (“dmesg”) can help you arrive at a solution.
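The quoted checks can also be scripted. The snippet below performs the equivalent verification from Python, relying only on the standard Linux /proc/modules and /sys/class/net interfaces.

```python
from pathlib import Path

# Verify that the vrouter kernel module is loaded (equivalent to
# "lsmod | grep vrouter"): /proc/modules lists all loaded modules.
loaded = Path("/proc/modules").read_text()
print("vrouter module loaded:", "vrouter" in loaded)

# Verify that the tap interfaces created during installation exist:
# every network interface appears as a directory under /sys/class/net.
for ifname in ("pkt0", "vhost0"):
    print(f"{ifname} present:", Path(f"/sys/class/net/{ifname}").exists())
```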

The forwarder can also be installed as a user-space application that uses the Data Plane Development Kit (DPDK), which enables higher performance than the kernel module.

  • Packet flow

For every packet coming in from a VM, the vRouter forwarder needs to decide how to process it. The options are DROP, FORWARD, MIRROR, NAT or HOLD. Information about what to do is stored in flow table entries, and the forwarder uses packet headers to find the corresponding entry. For the first packet of a new flow there may be no matching entry yet. In that case, the vRouter forwarder sends the packet to the pkt0 interface, where the agent is listening. Using its local information about VRFs and ACLs, the agent pushes (via KSync and shared memory) a new flow entry to the forwarder and re-sends the packet. In other words, the vRouter forwarder does not have full knowledge of how to process every packet in the system, so it cooperates with the agent to obtain it. Because this process takes some time, the first packet sent through the vRouter may arrive with a visible delay.
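The snippet below is a deliberately simplified model of this lookup-then-punt behaviour; all names are illustrative, and the real forwarder and agent are far more involved.

```python
from enum import Enum

class Action(Enum):
    DROP = "drop"
    FORWARD = "forward"
    MIRROR = "mirror"
    NAT = "nat"
    HOLD = "hold"

# Flow table: a 5-tuple key mapped to the action decided for that flow.
flow_table: dict[tuple, Action] = {}

def agent_resolve(flow_key):
    # Stand-in for the agent listening on pkt0: it consults its local
    # VRF/ACL data, installs a flow entry (via KSync and shared memory
    # in the real system) and lets the packet be re-sent.
    action = Action.FORWARD  # e.g. the ACLs permit this flow
    flow_table[flow_key] = action
    return action

def process_packet(flow_key):
    action = flow_table.get(flow_key)
    if action is None:
        # First packet of a new flow: no entry yet, so punt to the agent.
        # This round trip is why the first packet may be visibly delayed.
        action = agent_resolve(flow_key)
    return action

key = ("10.0.0.1", "10.0.0.2", "tcp", 34567, 80)
print(process_packet(key))  # punted to the agent, then FORWARD
print(process_packet(key))  # fast-path hit in the flow table
```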

Figure 5: Tungsten Fabric Compute with OpenStack
Figure 6: Tungsten Fabric Compute with Kubernetes

Tungsten Fabric with OpenStack and Kubernetes—an overview

To sum up, Figures 7 and 8 provide an overview of the TF integration with OpenStack and Kubernetes, respectively.

Figure 7: Tungsten Fabric with OpenStack
Figure 8: Tungsten Fabric with Kubernetes

Szymon Krasuski

Software Engineer

Szymon is a Software Engineer at CodiLime. On a daily basis he develops networking solutions for cloud infrastructures, usually based on Kubernetes. He’s also a huge fan of cinematography and architecture.

Paweł Marchewka

Software Engineer

Paweł is a software engineer with eight years of experience in a variety of technologies. At CodiLime, he currently works with Kubernetes and Software Defined Networking solutions in a project that requires both developer and operations skills. In the past, he worked on low-level embedded software in C as...

Piotr Sykulski

Software Engineer

Piotr is a Software Engineer at CodiLime. His day-to-day work includes extending Kubernetes’ capabilities by writing operators in Go. After hours, Piotr enjoys playing board and online games with his friends.

Paweł Zadrożny

Senior Software Engineer

Paweł Zadrożny is a Software Engineer at CodiLime. His background includes broad backend experience, including development, implementation and maintenance of applications and websites and deployment of finished products. He is well versed in various programming languages, including Python, Go, JavaScript, and...

Łukasz Drzewiecki

Senior Network Engineer

Łukasz Drzewiecki is a Senior Network Engineer with twenty years of experience. He has worked on projects involving development of automation tools, supporting customers with complex multi-environment deployments with SmartNICs, and development of components for mobile services. He has a strong DevOps...
