Is Network Service Mesh a service mesh?

Service mesh solutions are increasingly becoming an important element of modern applications built on the basis of microservices architecture. But here and there, you can also hear about a project called Network Service Mesh. Is it another variation on the service mesh theme or something completely different?

What does Network Service Mesh do?

A service mesh is a kind of “network” that connects all microservices within a given application. In fact, it is extra software that acts as an intermediary layer between services and provides a wide range of functions in the fields of security, observability, and traffic management, as well as reliability and resilience.

Assuming that your application is deployed in Kubernetes, you can take advantage of this type of functionality by installing one of the popular service mesh implementations - e.g. Linkerd, Istio, Kuma, or Traefik Mesh - on top of your K8s cluster. It should be noted that a service mesh has an application-centric focus (mainly Layer 7 of the OSI model, with protocols like HTTPS or gRPC).

A service mesh can solve many challenges related to higher-level networking, especially in the context of east-west traffic handling within the Kubernetes cluster. But what about use cases requiring lower-level networking functionality, e.g. L2/L3 network features or connectivity outside the K8s cluster domain? Kubernetes by itself does not provide a solution here, as it concentrates mainly on container orchestration rather than advanced networking.


Fig. 1 Comparison of a service mesh and Network Service Mesh - the network layers on which they work

Here comes Network Service Mesh (NSM), an open source project which is part of the Cloud Native Computing Foundation (CNCF). It aims to offer connectivity, observability, security, configurability, and discoverability for the lower layers of the network stack, on the so-called Network Service level (this concept has a specific meaning within NSM).

The role of a service mesh is to act as a proxy and secure connections between workloads deployed in the cluster, as well as to provide fine-grained control and insight over such traffic, mostly at L7. NSM, in contrast, focuses on L2/L3 processing, according to the "policy" defined within the Network Service(s) that a given workload wants to consume.

As its name indicates, Network Service Mesh has been inspired by and has many analogies to the service mesh concept. However, it is definitely not another service mesh implementation but rather a parallel solution which, in fact, can interact well (in the sense it can be used in the same cluster) with a service mesh like Istio, for example.

NSM architecture

The NSM solution is not tied to a particular runtime domain (for example, it can be used in a VM context as well as in K8s) though in this article we focus on container environments, such as Kubernetes.

Network Service Mesh provides additional features to K8s, though it does not replace the existing K8s networking model, CNI. Instead, both CNI plugins and NSM can work in parallel. Also, NSM is complementary to traditional service meshes, as already mentioned. 


Fig. 2 Network Service Mesh components in a Kubernetes environment - high level view

Within the Network Service Mesh concept one can define the following elements: 

  • Network Service - is defined as a collection of connectivity, security, and observability features applied to traffic. In its most basic form, it is just a distributed L3 domain that allows the workloads to communicate via IP. 
  • Network Service Client (NSC) or simply Client - is an application workload which connects to Network Service (a Client can connect to many Network Services at the same time). A Client can be a Pod, VM or even a physical server.
  • Network Service Endpoint (NSE) or simply Endpoint - provides a Network Service to a Client. It can be realized as a local Pod, a remote Pod (in a different cluster than the one where the Client Pod is located), a VM, any other function that processes packets, etc.
  • virtual Wire (vWire) - connects a Client to an Endpoint (carries frames/packets between the Client and the Endpoint). vWire provides simple functionality: a packet entering vWire at one end (ingress) will leave at the other end (egress).

Network Service Mesh components for Kubernetes environments are depicted in Fig. 2 (together with example Network Services). Their roles can be explained as follows:

  • Network Service Registry (NSR) - contains a list of available Network Services and Network Service Endpoints. Additionally, NSM architecture supports Registry Domains, allowing multiple independent registries to coexist.
  • Network Service Manager (NSMgr) - is a control plane component (deployed as a daemon set on the K8s cluster) responsible for forming a full mesh by establishing communication with other Network Service Managers within a given domain. It manages Network Service requests coming from clients’ Pods and the process of creating a vWire between the Client and the Endpoint.
  • Network Service Mesh Forwarder - a dataplane component, responsible for providing forwarding mechanisms. NSM can use forwarding solutions like VPP, SR-IOV, kernel networking, etc. 
  • Admission Webhook - Network Service Mesh uses the K8s Admission Controller approach to monitor deployment of Client Pods and reacts when they (i.e. the corresponding Clients’ manifest files) include annotations related to NSM. In such a case, Admission Webhook adds an NSM init container to the Pod which is responsible for setting up the requested Network Service (the NSM init container negotiates with NSMgr to accomplish this process and as a result a Network Service interface is injected into the Client Pod). The process is transparent from a Client Pod perspective.
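To illustrate the webhook's effect, the mutation it performs can be thought of as adding an init container roughly like the one below. This is a sketch, not the exact injected spec: the image name, tag, and environment variable are indicative of the upstream cmd-nsc-init implementation and may differ between NSM versions.

```yaml
# Sketch: an NSM init container added to an annotated Client Pod by the Admission Webhook
spec:
  initContainers:
    - name: nsm-init
      image: ghcr.io/networkservicemesh/cmd-nsc-init:v1.5.0  # indicative image/tag
      env:
        - name: NSM_NETWORK_SERVICES   # derived from the Pod's NSM annotation
          value: kernel://ns-first/nsm-1
```

The init container negotiates with the NSMgr, so that by the time the application containers start, the requested Network Service interface is already present in the Pod's network namespace.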

To make it work, two API endpoints are added to K8s:

  • Network Service API - used to Request, Close, or Monitor vWire connections between the Client and Endpoint providing the requested Network Service.
  • Registry API - used to Register, UnRegister, and Find Network Services and the Network Service Endpoints that provide them.

Additionally, NSM integrates with SPIFFE/SPIRE to provide authentication and authorization functionality (this allows fine-grained security configuration, e.g. a workload can be connected only to the required Network Service(s) and is isolated from any others).

NSM configuration in K8s

Network Service Mesh follows a cloud-native approach with a declarative configuration, allowing description of the intended network state (which is then deployed and applied) - one could say this is the standard “Kubernetes way”. An example Network Service configuration is presented below. 

apiVersion: networkservicemesh.io/v1
kind: NetworkService
metadata:
  name: my-network-service   # example name
spec:
  payload: IP
  matches:
    - source_selector:
        app: myapp
        version: "3.1"
      routes:
        - destination_selector:
            service: sec-tunnel
    - source_selector:
        app: sec-tunnel
      routes:
        - destination_selector:
            service: gateway
    - routes:
        - destination_selector:
            service: gateway

A Network Service object needs to have a name specified. You can also indicate the so-called Registry Domain for it by adding an '@' suffix. Under "spec" you can define a payload type (either Ethernet or IP) and a list of matches. The matching is based on the labels configured for potential Clients and Endpoints. Using those labels, it is possible to define which Endpoint, providing a given Network Service, the Client will be connected to (through a vWire).
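As a small illustration of the '@' suffix mentioned above, a Network Service can be placed in a specific Registry Domain by naming it as follows (both names here are hypothetical):

```yaml
apiVersion: networkservicemesh.io/v1
kind: NetworkService
metadata:
  # "@my-domain" (hypothetical) assigns this Network Service
  # to the "my-domain" Registry Domain
  name: my-service@my-domain
```

This is particularly relevant in multi-cluster (interdomain) setups, where several independent registries coexist.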

No changes are necessary to applications (acting as Clients) that want to consume Network Services already registered in the NSM system - Pods can leverage NSM features simply by declaring which Network Services they are part of. This can be done using an annotation, e.g.:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    networkservicemesh.io: kernel://ns-first

In the example above, the definition for the Pod named "myapp" requests that an additional kernel interface be injected into the Pod's network namespace and be connected to the Network Service called "ns-first".
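As noted earlier, a Client can consume several Network Services at once. Assuming the annotation accepts a comma-separated list of NSM URLs (as in the upstream examples), this could look like the following sketch; the service names are hypothetical, and the path segment after the service name (e.g. nsm-1) names the injected interface:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  annotations:
    # hypothetical service names; each entry requests one injected interface
    networkservicemesh.io: kernel://ns-first/nsm-1,kernel://ns-second/nsm-2
```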

Example use cases

NSM has been created to solve some limitations of existing networking models in cloud-native environments:

  • It supports multi-cluster connectivity and connectivity for hybrid environments (e.g. K8s and VMs).

    • Application workloads can connect to Network Service(s), independent of where they run. 
    • Applications can connect to multiple service meshes at the same time.
    • NSM can provide an inter-cluster connectivity domain for a service mesh like Istio.
  • It provides support for non-standard protocols (e.g. proprietary DB replication protocols).

  • It allows easy creation of Service Function Chaining (aka service composition).

  • In an NFV context, NSM can provide support for high bandwidth and highly configurable environments.


Fig. 3 Example of a hybrid environment with Network Services (source: NSM docs)

These are example use cases. It is worth noting that NSM can support complex networking cases not possible with “standard” solutions (dealing only with higher layers of the networking stack).

How to start 

A good starting point for your first steps with NSM is the official setup documentation and the related code repository. These assume the use of a Kubernetes cluster, as it is the easiest approach.

Deployment scripts and manifests support different types of K8s environments, including local ones, GKE (on GCP), AKS (on Azure), and EKS (on AWS). 

The official repository contains several examples for deployment configuration, starting with a basic deployment but also including more advanced NSM features and use cases. Based on those examples, you can build your own solutions (by taking and modifying the required elements). 


Is Network Service Mesh a service mesh? Well, strictly speaking, it is not. However, it is based on similar principles that are present in the original service mesh concept. Within NSM, they are expressed as connectivity, security, and observability features provided in the form of a so-called Network Service that can be consumed by given workloads.

Unlike service meshes, however, those features are not delivered through the use of L7 proxies. This is not required as NSM processes traffic in the lower layers of the network stack. From a Kubernetes networking perspective, NSM should be seen as a complementary solution (which co-exists with K8s CNI plugins and service mesh implementations deployed in the cluster) allowing the support of complex network use cases in a cloud-native fashion.

Original post date 08/24/2021, update date 07/21/2022.


Paweł Parol

Solutions Architect

Michał Pawłowski

Senior Network Engineer