Kubernetes Jenkins Master-Slave: Scaling the Scalability Issue


Source – devops.com

Dell and many other companies use Jenkins as a CI/CD tool, running parallel builds against substantial code bases 365 days a year. You may not have that much code to build, but the slave created during the build process often stays up even after the build completes. That results in higher cost, unnecessary resource utilization and a more complex delivery pipeline.

Is there a way to overcome this situation? Yes. The solution is scalability.

Jenkins Scalability

One of Jenkins' strongest features is its ability to scale beyond the limits of a single instance. In this article we look at the Jenkins master/slave model: one central Jenkins instance, referred to as the master, schedules jobs across a set of worker instances called slaves. Some features of slaves are:

  • Running multiple build tasks in parallel.
  • Replacing failed Jenkins instances automatically.
  • Spinning up and terminating slaves on demand, which reduces cost.

There are a few ways to implement Jenkins scaling; perhaps the easiest is to run Jenkins scaling on Kubernetes.

This article will answer certain questions regarding the scaling process on Kubernetes, including:

  • How is a slave pod created when the master triggers a build?
  • How is the build assigned to the newly created slave?
  • How does the slave communicate with the master during and after the build?

Jenkins Master Installation

Prerequisites:

    • Jenkins Docker file
    • Kubernetes-Jenkins-deployment.yaml file
    • Kubernetes-Jenkins-service.yaml file
    • Kubernetes persistent volume
    • Kubernetes persistent volume claim (see the sketch below)

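For the persistent volume claim, a minimal sketch is shown below. The name (jenkins-pvc) and the storage size are assumptions for illustration, not values from the original setup; if your cluster has no default storage class, you will also need a matching persistent volume or an explicit storageClassName.

    # Kubernetes-Jenkins-pvc.yaml (illustrative): storage for the Jenkins home directory
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: jenkins-pvc
    spec:
      accessModes:
        - ReadWriteOnce          # a single Jenkins master pod mounts the volume
      resources:
        requests:
          storage: 10Gi          # size the claim to fit your Jenkins home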

The preferred way to install Jenkins on Kubernetes is to create a custom Docker image based on the Jenkins base image. Write a Docker file that starts from the base image in the public Docker repository, then fine-tune your Jenkins application on top of it: add plugins, install Maven (via a RUN command) and pull in any other dependencies your builds need.

The Docker file prepares the image so that you don't need to install any plugins after Jenkins is deployed; it's all done when the Jenkins Docker image is built.
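
A minimal sketch of such a Docker file is shown below. The plugin list and the Maven step are illustrative assumptions; recent jenkins/jenkins:lts images ship the jenkins-plugin-cli tool used here, while older images provide install-plugins.sh instead.

    # Custom Jenkins master image: public base image plus plugins and build tools
    FROM jenkins/jenkins:lts

    # Install Maven (and any other build dependencies) as root
    USER root
    RUN apt-get update && \
        apt-get install -y --no-install-recommends maven && \
        rm -rf /var/lib/apt/lists/*
    USER jenkins

    # Bake the plugins into the image so nothing needs to be installed after deployment
    RUN jenkins-plugin-cli --plugins kubernetes git workflow-aggregator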

Create a Jenkins-deployment.yaml file with the deployment name, the custom Jenkins Docker image, the persistent volumes and other inputs, then deploy the application.
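
A sketch of the deployment file follows, assuming the custom image has been pushed to a registry as my-registry/custom-jenkins:lts (a placeholder) and that the persistent volume claim is named jenkins-pvc as above.

    # Kubernetes-Jenkins-deployment.yaml (sketch)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: jenkins
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: jenkins
      template:
        metadata:
          labels:
            app: jenkins
        spec:
          containers:
            - name: jenkins
              image: my-registry/custom-jenkins:lts   # the custom image built above
              ports:
                - containerPort: 8080                 # web dashboard
                - containerPort: 50000                # JNLP port used by the slaves
              volumeMounts:
                - name: jenkins-home
                  mountPath: /var/jenkins_home        # keep jobs and config across restarts
          volumes:
            - name: jenkins-home
              persistentVolumeClaim:
                claimName: jenkins-pvc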

Since each deployment creates a pod, let's check the status of the pod and confirm that Jenkins was deployed on the Kubernetes cluster.
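
Assuming the file and label names used above, the deployment and a quick status check look like this:

    # Deploy Jenkins and check that the pod comes up
    kubectl apply -f Kubernetes-Jenkins-deployment.yaml
    kubectl get pods -l app=jenkins

    # Tail the startup log to confirm Jenkins is running
    kubectl logs deployment/jenkins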

Now, let’s create the service file for the Jenkins deployment to access the dashboard.
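
A NodePort service sketch for the deployment above follows; unless you pin the nodePort explicitly, Kubernetes assigns one from its NodePort range (32683 in the example below).

    # Kubernetes-Jenkins-service.yaml (sketch)
    apiVersion: v1
    kind: Service
    metadata:
      name: jenkins
    spec:
      type: NodePort            # expose the dashboard on every node's IP
      selector:
        app: jenkins
      ports:
        - name: http
          port: 8080
          targetPort: 8080      # reachable externally on the assigned NodePort
        - name: jnlp
          port: 50000
          targetPort: 50000     # slaves connect back to the master on this port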

Once the service is created, we can access the Jenkins dashboard using the node IP and the port number generated by the service (32683 in this example).

Slave Configuration on Jenkins

As soon as the Jenkins dashboard is up, check that the Kubernetes plugin is installed. Once it is, the options to configure slave details appear under Manage Jenkins–>Configure System–>Kubernetes–>Add Cloud.

In the Kubernetes cloud section, fill in the cluster details (Kubernetes URL and namespace), the Jenkins URL, and a pod template with a name, a label and the JNLP Jenkins slave image that will be used to spin up slave pods.
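
This article configures the cloud through the UI; for reference, the same settings can also be expressed with the Jenkins Configuration as Code plugin. The sketch below is an illustration only, and the URLs, label and image are assumptions to adapt to your cluster:

    # JCasC sketch of the Kubernetes cloud (illustrative values)
    jenkins:
      clouds:
        - kubernetes:
            name: "kubernetes"
            serverUrl: "https://kubernetes.default.svc"   # API server as seen from inside the cluster
            namespace: "default"
            jenkinsUrl: "http://jenkins:8080"             # the Jenkins service created earlier
            jenkinsTunnel: "jenkins:50000"                # JNLP port the slaves use to reach the master
            templates:
              - name: "jenkins-slave"
                label: "jenkins-slave"                    # the label jobs will be restricted to
                containers:
                  - name: "jnlp"
                    image: "jenkins/inbound-agent:latest" # JNLP slave image
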
Then go to Manage Jenkins–>Configure Global Security–>Agents, set the TCP port for JNLP agents to Fixed: 50000 and enable the Java Web Start Agent Protocol/1 agent protocol.

Now, create a freestyle job; in the General tab, check “Restrict where this project can be run” and enter the slave name (label) that we configured in the settings.

Workflow

  • When a job is triggered from the master, it looks up the slave configuration details.
  • Based on the configuration in the master, a slave pod is created from the JNLP Jenkins slave image and assigned to the build generated by the master until the build completes.
  • After a successful build, the slave waits for further scheduled builds for a certain period of time. If nothing arrives from the master, the slave posts the existing build results to the master and terminates.
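
You can observe this lifecycle from the cluster side; while a build is running, the slave pod appears alongside the master and disappears once the slave terminates:

    # Watch slave pods being created when a build starts and removed after it completes
    kubectl get pods -w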