Utilizing vCluster for Deploying Developer Environments

In the fast-paced world of software development, creating isolated environments for testing and development purposes is crucial. These environments must be efficient, scalable, and easy to set up and tear down. One solution that is gaining traction in the DevOps community for such purposes is vCluster.

What is vCluster?

vCluster is a tool that allows for the creation of lightweight, ephemeral Kubernetes clusters within existing clusters. These clusters can be quickly created and removed, making them ideal for creating isolated environments for various purposes. The vClusters run on top of real Kubernetes clusters and reuse their resources, which are synced between the virtual cluster and the underlying cluster.

Advantages of using vCluster

  • Cost efficiency
    vCluster significantly reduces costs compared to deploying separate "real" clusters, as it utilizes existing resources more effectively.
  • Rapid provisioning
    The ability to launch new clusters quickly accelerates development and testing workflows.
  • Low overhead
    By default, vCluster utilizes the lightweight Kubernetes distribution K3s, minimizing resource consumption.
  • Isolation and security
    vCluster provides stronger isolation than plain namespaces, improving the security of each environment.
  • Cluster-scope admin access
    Users have cluster-level admin access within their vCluster environments, giving them greater control.
  • No conflicting CRDs
    vCluster eliminates conflicts with Custom Resource Definitions (CRDs), streamlining the environment setup process.

How does vCluster work?

vCluster combines a Go binary with Helm charts for easy deployment: the vcluster CLI installs each new virtual cluster as a Helm release. With K3s (the default vCluster Kubernetes distribution), each new virtual cluster runs as a StatefulSet with two main parts:

  1. Kubernetes API distribution
    vCluster defaults to K3s, a lightweight Kubernetes distribution. Each virtual cluster therefore gets its own dedicated Kubernetes API server and can be managed and operated independently.
  2. vCluster daemon
    The vCluster daemon synchronizes resources between the virtual cluster and the underlying cluster. It handles communication and resource management so that operations inside the vCluster environment are accurately and efficiently reflected in the underlying infrastructure.

While K3s offers a lean solution, alternative Kubernetes distributions may require additional components, such as etcd. To see exactly which components are needed and how they can be parameterized, it is worth exploring the chart templates code. The templates give a good picture of vCluster's flexibility and configurability and help tailor deployments to specific needs.
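
A quick way to see these pieces on the host cluster is to list what the Helm release created. The namespace below follows vCluster's default naming (vcluster-<name>); "test1" matches the example used later in this post:

# On the host cluster: the virtual cluster's control plane runs as a StatefulSet,
# together with the pods and services it needs
kubectl get statefulset,pods,services -n vcluster-test1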

The fastest way to get started with vCluster

The quickest way to get started is by using a kind cluster. To play around with vCluster, you'll need the kubectl, kind, and vcluster binaries. Follow the documentation on each project's website for installation instructions.
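
For reference, one possible way to install the vcluster CLI on Linux looks like this (the release URL pattern matches the one used in the Dockerfile later in this post; adjust the version and architecture for your system):

# Download the vcluster CLI from the project's GitHub releases and put it on the PATH
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64"
chmod +x vcluster
sudo mv vcluster /usr/local/bin/vcluster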

To create a kind cluster, simply run the following command:

kind create cluster

After setting up the kind cluster, you can create a vCluster using the vCluster binary. Here's an example of how to create a vCluster named "test1":

vcluster create test1

The command will initiate the creation process, and you'll see updates as the vCluster is set up. Once the vCluster is created successfully, the kubectl context is switched automatically, and you're ready to use your vCluster for testing and development purposes. You can interact with it using kubectl as you would with any other Kubernetes cluster:

kubectl get namespaces

And that's it! Now we can start using the cluster. When we're done, we can switch back to the underlying cluster by running the vcluster disconnect command.
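
For example (the nginx deployment here is only illustrative):

# Work inside the virtual cluster as usual...
kubectl create deployment nginx --image=nginx
kubectl get pods

# ...then point kubectl back at the underlying (host) cluster
vcluster disconnect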

Managing short-lived vCluster instances with Jenkins pipeline

We'll explore how to use vClusters as a tool for creating development environments and demonstrate their integration into a CI/CD pipeline using Jenkins. While our example uses the RKE2 Kubernetes distribution as the underlying cluster, vCluster can be deployed on any Kubernetes distribution, including development-oriented ones like kind or minikube.

Let's take a look at a sample Jenkins job manifest that demonstrates how to create vCluster environments dynamically:

pipeline {
    agent any
    environment {
        KUBECONFIG=credentials("example-cluster-kubeconfig")
        VCLUSTER_HOST= "${params.VCLUSTER_NAME}.example-vcluster.intra.example.com"
    }
    parameters {
        string(name: 'VCLUSTER_NAME', defaultValue: '', description: 'Name of vcluster.')
        string(name: 'VCLUSTER_TTL', defaultValue: "3", description: 'Choose how long the vcluster should be available (days). 0 for infinity.')
    }
    stages {
        stage('Checkout code') {
            steps {
                cleanWs()
                checkout([
                    $class: 'GitSCM', branches: [[name: 'master']],
                    userRemoteConfigs: [[url: 'https://gitlab.intra.example.com/example/vclusters.git', credentialsId: 'example-vcluster-gitlab']]
                ])
            }
        }
        stage("Create vcluster") {
            steps {
                script {
                    def vclusterImage = docker.build(
                        "vcluster:0.0.1",
                        "-f docker/vcluster/Dockerfile docker/vcluster"
                    )
                    vclusterImage.inside('-u root:root') {
                        sh """
                            if [ ${params.VCLUSTER_TTL} -eq 0 ]; then
                                VCLUSTER_TTL_INCR=\$(( 36500 * 24 ))
                            else
                                VCLUSTER_TTL_INCR=\$(( ${params.VCLUSTER_TTL} * 24 ))
                            fi
                            yq -i "
                                .globalAnnotations.vcluster-ttl = now +\\"\${VCLUSTER_TTL_INCR}h\\" |
                                .ingress.host = \\"${env.VCLUSTER_HOST}\\"
                            " jenkins/inputs/vcluster-helm-values.yaml
                            vcluster create \
                                --connect=false \
                                -f jenkins/inputs/vcluster-helm-values.yaml \
                                ${params.VCLUSTER_NAME}
                            vcluster connect \
                                --service-account admin \
                                --cluster-role cluster-admin \
                                --update-current=false \
                                --insecure \
                                --server=https://${env.VCLUSTER_HOST} \
                                --kube-config=vcluster_kubeconfig_${params.VCLUSTER_NAME}.yaml \
                                ${params.VCLUSTER_NAME}
                        """
                    }
                }
            }
        }
    }
    post {
        success {
            script {
                archiveArtifacts artifacts: "vcluster_kubeconfig_${params.VCLUSTER_NAME}.yaml", fingerprint: true
            }
        }
    }
}

In this Jenkins job manifest, we define a pipeline that includes stages for checking out code and creating a vCluster environment dynamically. The parameters allow customization of the vCluster's name and time-to-live (TTL). Once the vCluster is created, its configuration is archived for later use.
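
Once the job finishes, a teammate can download the archived kubeconfig from Jenkins and use it directly. The file name below assumes a vCluster named "test1":

# Point kubectl at the freshly created vCluster using the archived kubeconfig
export KUBECONFIG=vcluster_kubeconfig_test1.yaml
kubectl get namespaces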

To provide the required tooling (vcluster, kubectl, and yq) inside Jenkins, we build an image from the following Dockerfile:

FROM debian:12-slim

ARG VCLUSTER_VERSION=v0.15.7
ARG KUBECTL_VERSION=v1.27.4
ARG YQ_VERSION=v4.34.2

RUN apt-get update && apt-get install -y \
      ca-certificates \
      jq \
    && rm -rf /var/lib/apt/lists/*

ADD --chmod=755  https://dl.k8s.io/release/${KUBECTL_VERSION}/bin/linux/amd64/kubectl /usr/local/bin/kubectl

ADD --chmod=755 https://github.com/loft-sh/vcluster/releases/download/${VCLUSTER_VERSION}/vcluster-linux-amd64 /usr/local/bin/vcluster

ADD --chmod=755 https://github.com/mikefarah/yq/releases/download/${YQ_VERSION}/yq_linux_amd64 /usr/local/bin/yq
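
If you want to verify the image outside of Jenkins, a quick local build and smoke test might look like this (the tag and paths mirror those used in the pipeline):

# Build the tooling image the same way the pipeline does
docker build -t vcluster:0.0.1 -f docker/vcluster/Dockerfile docker/vcluster

# Confirm the CLIs are present and runnable
docker run --rm vcluster:0.0.1 vcluster --version
docker run --rm vcluster:0.0.1 yq --version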

As each vCluster instance is a new Helm release installed on the cluster, we can pass the values to it just like we usually do with Helm Charts. Our job uses the following template file for generating values files for vClusters:

---
ingress:
  enabled: true
  ingressClassName: nginx
  host: test2.example-vcluster.intra.example.com

sync:
  ingresses:
    enabled: true

globalAnnotations:
  vcluster-ttl: "2023-07-30T20:13:59Z"

In the template, we enable the ingress feature and define a vhost for the vCluster, which is overridden by the yq tool during pipeline execution. Enabling ingress makes it easy to share access to the vCluster Kubernetes API and to applications deployed on it.
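
Assuming the ingress controller and DNS are in place, the vCluster API server becomes reachable on that vhost, for example:

# The host below is the example vhost from the values template;
# depending on cluster settings, authentication may still be required
curl -k https://test2.example-vcluster.intra.example.com/version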

In the template, we also define the TTL, which determines how long the newly created environment should live. To remove old environments based on this TTL, we have another job with the following manifest:

pipeline {
    agent any
    triggers {
        cron('H */6 * * *')
    }
    environment {
        KUBECONFIG=credentials("example-cluster-kubeconfig")

    }
    stages {
        stage('Checkout code') {
            steps {
                cleanWs()
                checkout([
                    $class: 'GitSCM', branches: [[name: 'master']],
                    userRemoteConfigs: [[url: 'https://gitlab.intra.example.com/example/vclusters.git', credentialsId: 'example-vcluster-gitlab']]
                ])
            }
        }
        stage("Delete expired vclusters") {
            steps {
                script {
                    def vclusterImage = docker.build(
                        "vcluster:0.0.1",
                        "-f docker/vcluster/Dockerfile docker/vcluster"
                    )
                    vclusterImage.inside('-u root:root') {
                        sh """#!/bin/bash
                            VCLUSTERS=\$(vcluster list --output json | jq -r '.[].Name')

                            if [ -n "\${VCLUSTERS}" ]; then
                                for vcluster in \${VCLUSTERS}; do
                                    VCLUSTER_TTL=\$(kubectl get sts  -n vcluster-\${vcluster} \${vcluster} -o  jsonpath='{.metadata.annotations.vcluster-ttl}')
                                    if [ \$(date -u '+%s') -ge \$(date -u -d \${VCLUSTER_TTL} '+%s') ]; then
                                        echo "Removing expired vcluster \${vcluster}"
                                        vcluster delete \${vcluster}
                                    else
                                        echo "TTL for vcluster \${vcluster} is \${VCLUSTER_TTL}"
                                    fi
                                done
                            fi
                        """
                    }
                }
            }
        }
    }
}

These pipelines can be chained from other jobs, letting us spin up short-lived Kubernetes clusters on which applications can be deployed and tested, or fully functional clusters for any other purpose.

Summary

vCluster offers a cost-effective, efficient solution for creating isolated Kubernetes environments within existing clusters. By integrating vCluster into CI/CD pipelines, development teams can accelerate workflows, reduce overhead, and improve collaboration. With its rapid provisioning, low overhead, and robust security features, vCluster empowers teams to deliver high-quality software with confidence.

Tomasz Gromowski

DevOps Engineer
