
CodiLime Glossary

Some common networking terms clearly explained


Cloud-native

Cloud-native is an approach to building applications that are deployed directly to cloud infrastructure. Applications built with this approach are easily portable: they can run on different operating systems and are not tied to a particular machine.

What does cloud-native mean? 

Applications built with this approach are easily portable: they can run on different operating systems (based on a Linux kernel) and are not tied to a particular machine. Cloud-native applications can be built with different languages and runtimes, and can use the most suitable framework for each piece of functionality. That allows for better control over the whole application and greater flexibility in choosing the right technology.

The cloud-native approach pairs naturally with microservices. Instead of creating a single large app, we split it into smaller chunks called microservices. Each of them is responsible for performing a single task, e.g., managing user login or stock inventory. These chunks communicate with each other even though they might be written in different languages or use different technologies.

Cloud-native is also connected with the serverless concept. The application’s developers do not set up and manage servers to deploy their software. Instead, these tasks fall to a service provider that charges for the computing resources used. Physical servers are still used, but their management is shifted to the cloud provider. This allows developers to focus on the code and push it into production faster.
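To make the microservices idea more concrete, here is a minimal sketch of a single-task service written in Go. Everything in it (the endpoint, the port and the hard-coded stock data) is an illustrative assumption for this glossary entry, not a reference implementation:

```go
// inventory.go: a minimal single-task microservice sketch.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Hard-coded data for illustration only; a real service would query a datastore.
var stock = map[string]int{"laptop": 12, "router": 40}

// stockHandler answers one question and needs nothing beyond the request itself.
func stockHandler(w http.ResponseWriter, r *http.Request) {
	item := r.URL.Query().Get("item")
	count, ok := stock[item]
	if !ok {
		http.Error(w, "unknown item", http.StatusNotFound)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]int{item: count})
}

func main() {
	http.HandleFunc("/stock", stockHandler)
	log.Fatal(http.ListenAndServe(":8080", nil)) // the port is an assumption
}
```

A call such as `curl "http://localhost:8080/stock?item=router"` returns `{"router":40}`. Because the handler keeps no per-client state, the service can be packaged into a container image and run as any number of identical replicas behind a load balancer; a user-login service or any other task would live in its own, equally small deployment.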

Benefits of building cloud-native applications

Cloud-native applications are stateless, meaning they don’t store data generated by a client during one session for use in the next session with the same client. Stateful applications, on the other hand, do store such data and use it when the same client starts a new session (e.g., logging in via a web portal). Stateless applications have several advantages. They are easily scalable, both vertically and horizontally. They can be cached easily, which brings a corresponding boost in speed. Finally, a stateless application needs less storage and is not bound to a particular server: to work properly, it needs only the information the client provides within a single session.
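As a rough illustration of statelessness, the hypothetical Go handler below takes everything it needs from the request itself, so any replica of the service can answer any call. The route and field names are assumptions made for the example:

```go
// carttotal.go: a stateless handler; the client resends its own context each time.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// cart is the state the client carries with the request; the field name is illustrative.
type cart struct {
	Items []string `json:"items"`
}

func totalHandler(w http.ResponseWriter, r *http.Request) {
	var c cart
	if err := json.NewDecoder(r.Body).Decode(&c); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// Nothing is remembered after this response. A stateful design would instead
	// keep a server-side session map keyed by a cookie, binding the client to one instance.
	json.NewEncoder(w).Encode(map[string]int{"itemCount": len(c.Items)})
}

func main() {
	http.HandleFunc("/cart/total", totalHandler)
	log.Fatal(http.ListenAndServe(":8080", nil)) // the port is an assumption
}
```

Because no session map lives on the server, restarting the process or routing the next request to a different replica changes nothing for the client.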

Read more:

CLOUD

Kubernetes: what is it and how you can use it (part 1/2)

Kubernetes is an open-source system for container orchestration, enabling automated application deployment, scaling and management. Read this two-part blog post to understand the business perspective on Kubernetes. I will present a brief history of virtualization methods, the key concepts on which Kubernetes is built, and how it can help your business when it comes to running containerized applications. The second part covers six main reasons to adopt Kubernetes. First, let’s take a look at the market data on the adoption of Kubernetes.
CLOUD

Six reasons you may need a Managed Cloud Service Provider

According to Forrester data, 2019 will be the year when companies begin moving their core apps and operations into the cloud. As many companies have already seen, there are numerous benefits of cloud transformation and multiple vendors to choose from. According to RightScale, a full 91% of companies already use public cloud, 72% have used a private cloud and 58% of companies employ a multi-cloud strategy. With 91% of the organizations surveyed by CompTIA using some form of cloud computing, it is safe to say that companies are getting more and more cloud-reliant.
NETWORKS

Seamlessly transitioning to CNFs with Tungsten Fabric

Cloud-native Network Functions (CNFs) appear to be the next big trend in network architecture and a logical step in its evolution. Networks were initially based on physical hardware like routers, load balancers and firewalls. Such physical equipment was then replaced by today’s standard, virtual machines, to create Virtualized Network Functions (VNFs). Now, a lot of research is going into moving these functions into containers. In such a scenario, a container orchestration platform would be responsible for hosting CNFs.
NETWORKS

Uncontainerizable VNFs in a CNF environment

Cloud-native network functions (CNFs, for short) are a hot topic in network architecture. CNFs use containers as the base for network functions and would thus replace today’s most widely used standard, Virtual Network Functions (VNFs). In such a scenario, a container orchestration platform such as Kubernetes could be responsible not only for orchestrating the containers, but also for directing network traffic to the proper pods. While this remains an area under research, it has aroused considerable interest among industry leaders.
NETWORKS
CLOUD

How to create a custom resource with Kubernetes Operator

While developing projects on the Kubernetes platform I came across an interesting problem. I had quite a few scripts that ran in containers and needed to be triggered only once on every node in my Kubernetes cluster. This could not be solved using default Kubernetes resources such as DaemonSet and Job, so I decided to write my own resource using the Kubernetes Operator Framework. How I went about it is the subject of this blog post. When I confronted this problem, my first thought was to use a DaemonSet resource that utilizes initContainers and then starts a dummy busybox container running `tail -f /dev/null` or another command that does nothing (a rough sketch of that workaround appears after this article list).
CLOUD

Deploying a Kubernetes operator in OpenShift 4.x platform

Contrail-operator is a recently released open-source Kubernetes operator that implements Tungsten Fabric as a custom resource. Tungsten Fabric is an open-source, Kubernetes-compatible network virtualization solution that provides connectivity and security for virtual, containerized or bare-metal workloads. The operator needed to be adjusted to the OpenShift 4.x platform, which introduced numerous changes to its architecture compared with previous versions. In this blog post, you’ll read about three interesting use cases and their solutions.
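As a side note to the custom-resource article above, the `tail -f /dev/null` DaemonSet workaround it mentions can be sketched with the Kubernetes Go API types. The object below only illustrates the idea; the names, images and the echo command are assumptions, and, as the excerpt says, the full article goes on to build a custom resource with the Operator Framework instead:

```go
// daemonset_sketch.go: builds (but does not apply) a DaemonSet whose init
// container does the real per-node work and whose main container just idles,
// mirroring the "tail -f /dev/null" trick described above.
package main

import (
	"fmt"
	"log"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	labels := map[string]string{"app": "run-once-per-node"} // illustrative name

	ds := appsv1.DaemonSet{
		TypeMeta:   metav1.TypeMeta{APIVersion: "apps/v1", Kind: "DaemonSet"},
		ObjectMeta: metav1.ObjectMeta{Name: "run-once-per-node"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// The init container runs the one-off script on every node.
					InitContainers: []corev1.Container{{
						Name:    "node-setup",
						Image:   "busybox",
						Command: []string{"sh", "-c", "echo node setup done"}, // placeholder script
					}},
					// The main container only keeps the pod alive afterwards.
					Containers: []corev1.Container{{
						Name:    "pause",
						Image:   "busybox",
						Command: []string{"tail", "-f", "/dev/null"},
					}},
				},
			},
		},
	}

	out, err := yaml.Marshal(ds)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out)) // prints the manifest; applying it is a separate step
}
```

The sketch only prints the manifest. As the excerpt notes, default resources ultimately did not solve the author’s problem, which is why the full article builds a custom resource with the Operator Framework instead.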
