Blog > Networks > Kubernetes networking


Kubernetes networking explores connectivity and communication aspects within Kubernetes deployments, ensuring the smooth operation of containerized applications.

NETWORKS
CLOUD

Why service mesh matters: understanding the benefits of microservices networking

Building applications as distributed systems, especially with microservices architecture, is quickly becoming the new norm of software development. Microservices, when used in the right situation, can ensure that your application will be easy to scale, update, and fix. If you aim to create a cloud-native app, then microservices are typically your best choice. However, when you start looking into distributed applications, one potential problem stands out, and that is the issue of communication between the multiple services in your application.
CLOUD
NETWORKS

From Kubernetes Ingress to Kubernetes Gateway API

If you've ever dealt with application networking in Kubernetes, it's more than likely you've come across Ingress. However, it is worth knowing that Ingress has a worthy successor in the form of the Kubernetes Gateway API. If you want to get familiar with this new API, this article is what you need. Ingress is a Kubernetes API object that has been widely used for many years. It allows you to handle traffic entering the Kubernetes cluster from outside and to route it to multiple Services running in the cluster.
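To make the routing idea concrete, here is a minimal Ingress manifest (the host, Service names, and ports are hypothetical) that sends path-based traffic to two different Services in the cluster:

```yaml
# Hypothetical Ingress: routes external traffic for app.example.com
# to two backend Services based on the request path.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller is installed
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service    # hypothetical Service name
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service    # hypothetical Service name
            port:
              number: 80
```

Note that an Ingress object does nothing by itself; an ingress controller running in the cluster watches these objects and programs the actual routing.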
NETWORKS
CLOUD

Envoy configuration in a nutshell: Listeners, Clusters and More

In the previous blog post, I briefly discussed what Envoy Proxy is and where it can be used. If you're not familiar with Envoy, I strongly suggest reading that piece first. This text is meant for developers and DevOps engineers who want to learn how to make the most of Envoy's functionality. We will discuss how Envoy Proxy actually works and how it should be configured. Let's start with a simple example, which demonstrates the most common situation: a client initiates a connection with Envoy Proxy as it tries to reach the server.
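As a sketch of that client-to-server scenario, the following minimal static Envoy bootstrap (addresses and names are illustrative, not from the article) defines one listener that accepts client connections and one cluster that represents the upstream server:

```yaml
# Minimal static Envoy v3 bootstrap (hypothetical addresses):
# a listener on port 10000 proxies TCP traffic to an upstream cluster.
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress_tcp
          cluster: backend
  clusters:
  - name: backend
    connect_timeout: 1s
    type: STRICT_DNS
    load_assignment:
      cluster_name: backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: server.internal, port_value: 8080 }
```

The listener is where clients connect; the cluster is how Envoy reaches the server. Most of Envoy's richer features (HTTP routing, retries, observability) build on these same two primitives.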
NETWORKS
CLOUD

Handling L4/L7 traffic with Envoy proxy — Introducing Envoy

One of the most crucial qualities an experienced developer should have is knowing how to avoid reinventing the wheel. When creating a web application, there are a few common functionalities you need to provide no matter what your application does or what technology you use. Usually you want your application to at least support: secure connections (TLS), authentication, high availability, load balancing, circuit breaking, canary deployments, observability, and rate limiting. In this blog post, I will tell you about Envoy Proxy, a solution which provides not only the functionalities listed above but also many other neat features.
NETWORKS
CLOUD

Service mesh vs. Kubernetes Ingress — what is the difference?

Service mesh and Ingress are two solutions used in the area of application networking in Kubernetes. In this article you will see what characterizes each of them and understand where the real difference between them lies. A service mesh is a kind of special "system" for communication between applications, between different components of an application based on microservices architecture, or between various other workloads running in virtual environments such as Kubernetes. The solution provides a rich set of features in the fields of traffic management, reliability, resilience, security, and observability.
NETWORKS
CLOUD

What is a service mesh — everything you need to know

A service mesh is an increasingly popular solution in the area of application networking, in Kubernetes and other environments. If you are not yet familiar with the concept, in this article you will find everything you need to know before taking a deeper dive. Over the past few years, we have seen a shift away from monolithic approaches when designing software applications. Instead, modern design is based on microservices architecture. At the end of the day, it is about delivering essentially the same business logic, not as one large monolith but as a collection of loosely coupled and independently deployable services.
NETWORKS
CLOUD

How to build CNFs using Ligato framework

Cloud-native network functions (CNFs) are a hot topic today. In this blog post, I will take a stab at explaining why, and also present the Ligato framework, which allows you to build your own custom CNFs. We started talking about Virtual Network Functions (VNFs) a few years ago, when the concept of Network Function Virtualization (NFV) appeared. In short, the idea is that network functions can be deployed as virtual machines (VMs) instead of being delivered on dedicated hardware offered by vendors. Over time, telco operators and service providers launched their first field trials and then roll-outs of network functions based on this paradigm.
NETWORKS

Tungsten Fabric as a Kubernetes CNI plugin

CNI (Container Network Interface) is an interface between a container runtime and a network implementation. It allows different projects, like Tungsten Fabric, to provide their own implementations of CNI plugins and use them to manage networking in a Kubernetes cluster. In this blog post, you will learn how to use Tungsten Fabric as a Kubernetes CNI plugin to ensure network connectivity between containers and bare metal servers. You will also see an example of a nested deployment of a Kubernetes cluster into an OpenStack VM with the TF CNI plugin.
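For context, a CNI plugin is wired into a node through a small JSON network configuration file that the container runtime reads (conventionally from `/etc/cni/net.d/`). A minimal sketch, assuming the Tungsten Fabric (Contrail) plugin binary is named `contrail-k8s-cni`, might look like:

```json
{
  "cniVersion": "0.3.1",
  "name": "contrail-k8s-cni",
  "type": "contrail-k8s-cni"
}
```

The `type` field names the plugin executable the runtime invokes for every pod's network setup and teardown; everything else about how Tungsten Fabric programs the data plane happens behind that interface.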
NETWORKS
CLOUD

How to create a custom resource with Kubernetes Operator

While developing projects on the Kubernetes platform I came across an interesting problem. I had quite a few scripts that ran in containers and needed to be triggered only once on every node in my Kubernetes cluster. This could not be solved using default Kubernetes resources such as DaemonSet and Job. So I decided to write my own resource using Kubernetes Operator Framework. How I went about it is the subject of this blog post. When I confronted this problem, my first thought was to use a DaemonSet resource that utilizes initContainers and then starts a dummy busybox container running `tail -f /dev/null` or another command that does nothing.
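That workaround can be sketched as a DaemonSet manifest (image and script names here are hypothetical): the one-time work runs in an initContainer on every node, and the main container merely sleeps so the pod stays in the Running state:

```yaml
# Sketch of the DaemonSet workaround: run a script once per node via an
# initContainer, then keep the pod alive with a do-nothing container.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: run-once-per-node
spec:
  selector:
    matchLabels:
      app: run-once-per-node
  template:
    metadata:
      labels:
        app: run-once-per-node
    spec:
      initContainers:
      - name: setup
        image: registry.example.com/setup-script:latest   # hypothetical image
        command: ["/bin/sh", "-c", "/scripts/run-once.sh"] # hypothetical script
      containers:
      - name: pause
        image: busybox
        command: ["tail", "-f", "/dev/null"]               # does nothing, keeps pod Running
```

The drawback, which motivates the custom resource described in the article, is that the dummy container wastes a pod slot on every node forever just to mark the work as done.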

Get your project estimate

For businesses that need support in their software or network engineering projects, please fill in the form and we’ll get back to you within one business day.


We guarantee 100% privacy.

Trusted by leaders:

Cisco Systems
Palo Alto Networks
Equinix
Juniper Networks
Nutanix