Kubernetes

Kubernetes (K8s) is an open-source platform that orchestrates containerized applications, simplifying their deployment, scaling and management while ensuring high availability and efficient resource utilization in modern cloud-native environments.

NETWORKS
CLOUD

How to build CNFs using Ligato framework

Cloud native network functions (CNFs) are a hot topic today. In this blog post, I will take a stab at explaining why, and also present the Ligato framework, which allows you to build your own custom CNFs. We started talking about Virtual Network Functions (VNFs) a few years ago, when the concept of Network Function Virtualization (NFV) appeared. In short, the idea is that network functions can be deployed as virtual machines (VMs) instead of being delivered on dedicated hardware offered by vendors. Over time, telco operators and service providers launched their first field trials and then roll-outs of network functions based on this paradigm.
CLOUD

Deploying a Kubernetes operator in OpenShift 4.x platform

Contrail-operator is a recently released open-source Kubernetes operator that implements Tungsten Fabric as a custom resource. Tungsten Fabric is an open-source, Kubernetes-compatible network virtualization solution providing connectivity and security for virtual, containerized or bare-metal workloads. The operator needed to be adjusted to the OpenShift 4.x platform, which introduced numerous changes to its architecture compared with previous versions. In this blog post, you’ll read about three interesting use cases and their solutions.
NETWORKS
CLOUD

How to create a custom resource with Kubernetes Operator

While developing projects on the Kubernetes platform, I came across an interesting problem. I had quite a few scripts that ran in containers and needed to be triggered only once on every node in my Kubernetes cluster. This could not be solved using default Kubernetes resources such as DaemonSet and Job. So I decided to write my own resource using the Kubernetes Operator Framework. How I went about it is the subject of this blog post. When I confronted this problem, my first thought was to use a DaemonSet resource that utilizes initContainers and then starts a dummy busybox container running `tail -f /dev/null` or another command that does nothing, as in the sketch below.
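The workaround mentioned above can be sketched roughly as follows. This is a minimal, hypothetical example using the official Kubernetes Python client; the names (`node-setup`) and the setup command are illustrative, not taken from the original article.

```python
# Hypothetical sketch of the "DaemonSet + initContainer + dummy container" workaround
# described above, built with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

# The initContainer does the one-time work on every node.
init_container = client.V1Container(
    name="node-setup",  # illustrative name
    image="busybox",
    command=["sh", "-c", "echo 'one-time node setup goes here'"],
)

# The dummy container only keeps the pod alive so the DaemonSet stays Running.
dummy_container = client.V1Container(
    name="pause",
    image="busybox",
    command=["tail", "-f", "/dev/null"],  # does nothing and never exits
)

template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "node-setup"}),
    spec=client.V1PodSpec(
        init_containers=[init_container],
        containers=[dummy_container],
    ),
)

daemon_set = client.V1DaemonSet(
    api_version="apps/v1",
    kind="DaemonSet",
    metadata=client.V1ObjectMeta(name="node-setup"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"app": "node-setup"}),
        template=template,
    ),
)

client.AppsV1Api().create_namespaced_daemon_set(namespace="default", body=daemon_set)
```

The obvious drawback of this approach is that an idle container keeps running on every node forever, which is why the post goes on to build a custom resource with the Operator Framework instead.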
QUALITY ASSURANCE
CLOUD

How to build a test automation framework in the cloud

Have you ever wondered how to set up a test automation framework directly in the cloud? Well, in this blog post you will learn everything you’ll need to successfully create such a framework. We’re going to look at the pros and cons of preconfigured testing environments and those that are created dynamically. We’ll then show you how to include software testing in a CI/CD pipeline and achieve a high level of automation. Finally, we’ll break down what a message broker is and how it can be used when creating a testing architecture.
CLOUD

Security in Kubernetes — overview of admission webhooks

This blog post is a continuation of two previous posts on security mechanisms in Kubernetes. If you have not yet read them, click here for part 1 and part 2 to see how you can provide an adequate level of security in Kubernetes deployments. Existing admission controllers are very useful, as they allow you to validate or modify requests to the Kubernetes API server. However, they have two limitations: they have to be compiled into the API server and can be configured only at API server startup. The flexibility of admission webhooks helps solve these problems. Once enabled, their behavior depends on a special application running inside the Kubernetes cluster.
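To make the idea more concrete, here is a minimal, hypothetical sketch of the kind of application such a webhook points at: an HTTP endpoint that receives an AdmissionReview and answers with an allow/deny decision. Flask is used purely for illustration; in a real cluster the service must be served over HTTPS and registered with the API server through a ValidatingWebhookConfiguration (details omitted here).

```python
# Minimal, illustrative validating admission webhook: it rejects pods that
# request privileged containers. Flask is only an example HTTP server.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    pod = review["request"]["object"]

    # Deny the request if any container asks for privileged mode.
    privileged = any(
        (c.get("securityContext") or {}).get("privileged", False)
        for c in pod["spec"].get("containers", [])
    )

    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": review["request"]["uid"],
            "allowed": not privileged,
            "status": {"message": "privileged containers are not allowed"},
        },
    })

if __name__ == "__main__":
    # The API server only talks to webhooks over TLS; a real deployment
    # would pass certificate and key files here.
    app.run(host="0.0.0.0", port=8443)
```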
CLOUD

How to make your Kubernetes cluster secure

In the last couple of years, Kubernetes (K8s) has become one of the most popular tools for running containerized applications. Many cloud companies, including major ones, have adopted it to orchestrate their container-based workloads. Given its popularity, the problem of K8s security is becoming ever more pressing. Read our two-part blog post series on how to make a Kubernetes cluster secure. Part one provides a brief history of virtualization, presents admission controllers and how they work, and shows how Pod Security Policy, a powerful admission controller, can help you manage user actions on a Kubernetes cluster.
CLOUD

The benefits of Pod Security Policy — a use case

In the previous post I looked at how security is handled in Kubernetes. Let’s now see how it works in practice. One of the most powerful controllers is the Pod Security Policy admission controller (PSP). PSP is a cluster-level security mechanism that enables control over sensitive aspects of the pod specification. It allows you to define a set of conditions a pod must meet in order to be accepted into the system. The following use case illustrates how it works. Let’s assume that we have a shared file system
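As a rough illustration of the kind of conditions a PSP can express, the hypothetical check below mimics two typical ones in plain Python: rejecting privileged containers and rejecting hostPath volumes. It is only a sketch of the logic; the real enforcement is done by the PSP admission controller evaluating a PodSecurityPolicy resource, not by user code.

```python
# Illustrative sketch of PSP-style conditions, expressed as a plain Python check.
# Real enforcement happens in the PSP admission controller, not in user code.
def violates_policy(pod_spec: dict) -> list[str]:
    """Return the reasons why this pod spec would be rejected, if any."""
    reasons = []

    for container in pod_spec.get("containers", []):
        if (container.get("securityContext") or {}).get("privileged", False):
            reasons.append(f"container {container['name']!r} runs privileged")

    for volume in pod_spec.get("volumes", []):
        if "hostPath" in volume:
            reasons.append(f"volume {volume['name']!r} mounts a hostPath")

    return reasons


# Example: a pod that mounts a host directory would be rejected.
pod = {
    "containers": [{"name": "app", "image": "nginx"}],
    "volumes": [{"name": "data", "hostPath": {"path": "/mnt/shared"}}],
}
print(violates_policy(pod))  # ["volume 'data' mounts a hostPath"]
```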
CLOUD

Kubernetes: what is it and how you can use it (part 1/2)

Kubernetes is an open-source system for container orchestration enabling automated application deployment, scaling and management. Read this two-part blog post to understand the business perspective on Kubernetes. I will present a brief history of virtualization methods, the key concepts on which Kubernetes is built, and how it can help your business when it comes to running containerized applications. The second part covers six main reasons to adopt Kubernetes. First, let’s take a look at the market data on the adoption of Kubernetes.
CLOUD

How to use NVIDIA GPUs with Kubernetes — CodiLime approach

The combination of NVIDIA GPUs, which provide the raw computing power, and Kubernetes, which manages containerization, may seem like a perfect marriage of two complementary tools and an obvious solution. Yet, at the technical level, this combination, like many marriages, turned out to be trickier than expected. Read this blog post to find out how CodiLime managed to deal with this matter. Let’s introduce the main characters then: NVIDIA GPUs (Graphics Processing Units) are powerful tools used to accelerate computationally intensive tasks.

Get your project estimate

For businesses that need support in their software or network engineering projects, please fill in the form and we’ll get back to you within one business day.

We guarantee 100% privacy.

Trusted by leaders:

Cisco Systems
Palo Alto Networks
Equinix
Juniper Networks
Nutanix