Continuous Deployment with Google Container Engine and Kubernetes


Source: semaphoreci.com

Introduction

This tutorial will show you how to deploy a sample microservices application to Kubernetes and set up continuous deployment using SemaphoreCI. It includes a crash course in Kubernetes and Google Container Engine, and walks through building an automated deploy process.

Kubernetes, or “k8s” for short, is an orchestration tool for container native applications. Note that Kubernetes is about containers, and not only Docker containers. This tutorial uses Docker because it’s the current industry standard.

Kubernetes is a complex distributed system. This tutorial only requires access to a running Kubernetes cluster, and it shows you how to create a hosted cluster using Google Container Engine (or GKE). The tutorial assumes you have experience with Docker and the idea behind orchestration tools. All set? Let’s begin.

Introducing Kubernetes

Kubernetes is an open-source container orchestration tool for cloud native applications. Kubernetes is based on Google’s internal Borg orchestration tool. It’s a distributed system following the master/slave architecture. Kubernetes clusters may have multiple masters for high availability requirements. Master nodes manage containers on minion nodes. Minions run the “kubelet”, which handles communication with the masters and coordinates the container runtime (e.g. Docker or rkt, formerly Rocket).

Applications are modeled as “pods”. Pods contain one or more containers. A modern web application may have separate frontend and backend containers. These two containers form one pod. All containers in a pod run on the same node. Kubernetes handles all the networking and service discovery so containers may communicate to pod-local containers or to other containers in the cluster.
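
As an illustration (the pod name and images here are hypothetical, not taken from the sample app), a two-container pod can be declared like this:

```yaml
# Sketch of a pod grouping a frontend and backend container.
# Both containers run on the same node and can reach each other
# via localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: frontend
      image: nginx:1.11
      ports:
        - containerPort: 80
    - name: backend
      image: example/backend:latest   # placeholder image
      ports:
        - containerPort: 8080
```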

Pods are exposed to other pods via “services”. Kubernetes has a few different types of services, namely an external load balancer and a private proxy for internal access. Kubernetes can automatically create load balancers on supported cloud providers, e.g. GCP or AWS.
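
As a sketch (the name and label are illustrative), a service that routes to pods labelled app=web might look like this:

```yaml
# Sketch of a service exposing pods with the label app=web
# inside the cluster on port 80.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80          # port other pods connect to
      targetPort: 8080  # port the container listens on
  # type: LoadBalancer  # uncomment on GCP/AWS for an external load balancer
```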

Pods are the building blocks for higher level concepts. Kubernetes uses “deployments” to manage changing configuration (e.g. environment variables, container images, and more) for running pods. Deployments are backed by “ReplicaSets” (formerly Replication Controllers) for horizontal scaling. Deployments also use built-in liveness and readiness probes to monitor deployment rollouts.
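
A minimal deployment sketch (names and image are placeholders; the apiVersion shown is the one Deployments used around Kubernetes 1.4, the version in this tutorial):

```yaml
# Sketch of a deployment keeping three replicas of a pod template running.
apiVersion: extensions/v1beta1   # Deployment API group as of Kubernetes 1.4
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: server
          image: example/server:latest   # placeholder image
          ports:
            - containerPort: 8080
```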

You can interact with Kubernetes via the web dashboard or the kubectl CLI. kubectl uses YAML or JSON input to manage resources, e.g. create a pod/deployment/service. This tutorial uses kubectl exclusively.

There is so much more to Kubernetes. The project is extremely well documented with low level API documentation and high level user guides. Here are the main takeaways:

  1. Containers are grouped in “Pods”,
  2. “Services” expose “Pods” to the public internet or other pods in the cluster,
  3. “Deployments” manage scaling and configuring “Pods”, and
  4. kubectl is the CLI for cluster management.

These points should give you enough information to deploy the sample application.

The Sample Application

The sample application has three containers. The “server” container is a simple Node.js application. It accepts a POST request to increment a counter and a GET request to retrieve the counter. The counter is stored in redis. The “poller” container continually makes the GET request to the “server” to print the counter’s value. The “counter” container starts a loop and makes a POST request to the server to increment the counter with random values.
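
The counter logic is simple enough to sketch. Here is a hedged, self-contained approximation of the server’s behavior, with a plain in-memory variable standing in for redis (function names are illustrative, not taken from the repo):

```javascript
// In-memory counter standing in for redis, to keep the sketch self-contained.
let counter = 0;

// What the POST handler does: increment the counter by a given amount.
function increment(amount) {
  counter += amount;
  return counter;
}

// What the GET handler does: return the current value.
function getCounter() {
  return counter;
}

// What the "counter" container's loop does: pick a random amount (1..10)
// and send it to the server.
function randomIncrement() {
  const amount = Math.floor(Math.random() * 10) + 1;
  return increment(amount);
}

console.log(getCounter()); // 0
console.log(increment(5)); // 5
```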

We’ll play with setting up the application ourselves using kubectl before building the deployment pipeline.

Pre-Launch Checklist

Here’s a rundown of everything you’ll need to complete the tutorial:

  • A Google Cloud Platform account with a billing method,
  • docker-compose installed to build and push images for the sample application,
  • gcloud CLI installed,
  • kubectl installed, and
  • A SemaphoreCI account.

Creating the GKE Cluster

First, create a new project in your GCP account. You can do this via the web console or the CLI. The CLI version is available via the alpha release track. Here’s the CLI version. Replace semaphore-gke-tutorial with a name of your choosing:

gcloud alpha projects create semaphore-gke-tutorial 

Next, navigate to the GKE Dashboard with your project selected. You’ll see a message saying that GKE is not enabled yet because the project is not linked to a billing account. Click the button to select a billing account. This will take some time to kick in. You can refresh the web dashboard to check the status. You’ll see a blue “create container cluster” button once you’re good to go.

Time to create the GKE cluster. You may include the --zone option to change the geographical region; the default zone is in the United States. Remember the zone you use. You’ll need it later.

    gcloud container clusters create demo \
        --zone europe-west1-b \
        --project semaphore-gke-tutorial

The --project option sets the ID of the previously created project. The cluster is named demo. You can name it whatever you like. It will take some time to create the cluster. You’ll see something like this when it completes:

  gcloud container clusters create demo --zone europe-west1-b --project semaphore-gke-tutorial
  Creating cluster demo...done.
  Created [https://container.googleapis.com/v1/projects/semaphore-gke-tutorial/zones/europe-west1-b/clusters/demo].
  kubeconfig entry generated for demo.
  NAME  ZONE            MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
  demo  europe-west1-b  1.4.7           104.199.44.242  n1-standard-1  1.4.7         3          RUNNING

The next step is to get login credentials to use with kubectl.

  gcloud container clusters get-credentials demo \
    --project semaphore-gke-tutorial \
    --zone europe-west1-b

This command creates the kubectl configuration “context” for this cluster. You may configure multiple contexts for easy switching between clusters. Run:

  kubectl config get-contexts

You’ll see something similar to the output below. You will have more output if you’ve configured multiple contexts.

    kubectl config get-contexts
    CURRENT   NAME                                             CLUSTER                                          AUTHINFO                                         NAMESPACE
    *         gke_semaphore-gke-tutorial_europe-west1-b_demo   gke_semaphore-gke-tutorial_europe-west1-b_demo   gke_semaphore-gke-tutorial_europe-west1-b_demo

Note the NAME column. You’ll need this value shortly. You can see that gcloud container clusters get-credentials has also set this to the current context, denoted with *. You can override this value by passing --context [CONTEXT] to every kubectl command. Let’s test the cluster by asking kubectl for all the pods in the cluster.

    kubectl get pods --context gke_semaphore-gke-tutorial_europe-west1-b_demo

There should be no output because we’ve not created any pods.

Congratulations! You’ve just created your first Kubernetes cluster using Google Container Engine. Now it’s time to build and run the sample application.

Running the Application

Let’s familiarize ourselves with the sample application. First clone (or fork) the source repo. Next, run:

  docker-compose up --build

This will build all the images and start the containers. Once the build process completes, you’ll see a lot of output streaming to your screen. Here’s a sample:

    server_1   | npm info it worked if it ends with ok
    server_1   | npm info using npm@3.10.8
    server_1   | npm info using node@v6.9.1
    server_1   | npm info lifecycle server@1.0.0~prestart: server@1.0.0
    server_1   | npm info lifecycle server@1.0.0~start: server@1.0.0
    server_1   |
    server_1   | > server@1.0.0 start /usr/src/app
    server_1   | > node server.js
    server_1   |
    server_1   | Server running on port 8080!
    counter_1  | Incrementing counter by 5 ...
    poller_1   | Current counter: 117
    counter_1  | Incrementing counter by 2 ...
    counter_1  | Incrementing counter by 10 ...
    poller_1   | Current counter: 129
    counter_1  | Incrementing counter by 7 ...
    counter_1  | Incrementing counter by 9 ...
    poller_1   | Current counter: 145
    counter_1  | Incrementing counter by 5 ...

You can see the poller is printing the counter. The counter is sending requests to increment by a random number.

Let’s get the application running on Kubernetes. First, we need to push our Docker images to a registry accessible to our cluster. GKE clusters are automatically authenticated to an associated Docker registry (Google Container Registry or GCR). This is the easiest way to manage private Docker images for GKE.

Open up docker-compose.yml in your source checkout. You’ll see some TODO items. Replace the zone subdomain and project ID to match your cluster. Refer to the GCR push docs for the list of region-to-subdomain mappings.
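
GCR image names follow the pattern [ZONE-SUBDOMAIN.]gcr.io/[PROJECT-ID]/[IMAGE]. A filled-in fragment might look like this (the subdomain and project ID below are examples; substitute your own):

```yaml
# Hypothetical docker-compose.yml fragment after completing the TODOs.
# eu.gcr.io is the subdomain for European zones; use the one matching
# your cluster's zone.
services:
  server:
    build: ./server
    image: eu.gcr.io/semaphore-gke-tutorial/server
```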

Time to push images. We need access to the project’s GCR. This process is similar to the get-credentials command used earlier. Run:

  gcloud docker --authorize-only --project semaphore-gke-tutorial

This command generates a temporary credential entry in your Docker configuration (~/.docker/config.json) for the registry. Now, use docker-compose to push images.

  docker-compose build
  docker-compose push

The Docker images are now accessible to our cluster. Time to create our first pod.

Creating the First Pod

Open up k8s/development-pod.yml in your source checkout. You’ll see TODO items. Knock those out. This file is effectively equivalent to docker-compose.yml. There are some Kubernetes specifics at the top (the apiVersion, kind, and metadata). Then, there is a list of containers. The expected parts are configured:

  • image: The image to use,
  • ports: ports to expose and protocols (TCP or UDP), and
  • env: Environment variables.

These are common parts you’ll see for most pods. Note that command is not specified. This is because each Docker image (check each Dockerfile in the source) sets its CMD. The images use environment variables for everything (thus the API_URL and REDIS_URL values).
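
Once the TODOs are filled in, the container list might look something like this sketch (the GCR image paths are examples; the counter container is omitted for brevity). Containers in the same pod reach each other via localhost:

```yaml
# Hypothetical excerpt of a filled-in development pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: redis
      image: redis:3.2
      ports:
        - containerPort: 6379
    - name: server
      image: eu.gcr.io/semaphore-gke-tutorial/server  # replace with your GCR path
      ports:
        - containerPort: 8080
      env:
        - name: REDIS_URL
          value: redis://localhost:6379
    - name: poller
      image: eu.gcr.io/semaphore-gke-tutorial/poller  # replace with your GCR path
      env:
        - name: API_URL
          value: http://localhost:8080
```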

Time to create our first pod. Set the default context to avoid passing --context to all future commands.

  kubectl config use-context gke_semaphore-gke-tutorial_europe-west1-b_demo

Next, we’ll create a namespace. A namespace isolates resources. Separating environments is a common use case. You may create a namespace for production, staging, test, and development. Using namespaces is a best practice you should follow. Let’s create a development namespace now:
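
A namespace manifest is tiny. k8s/development-namespace.yml presumably looks something like this:

```yaml
# Minimal namespace manifest (sketch).
apiVersion: v1
kind: Namespace
metadata:
  name: development
```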

  kubectl create -f k8s/development-namespace.yml

The kubectl create command deals with files specified by the -f option. You may use YAML, JSON, or stdin. Next, create the pod:

  kubectl create -f k8s/development-pod.yml --namespace development

Next, check the pods in the development namespace.

  kubectl get pods --namespace development

You’ll see a list of pods and their status. Here’s an example:

  NAME      READY     STATUS    RESTARTS   AGE
  demo      4/4       Running   0          7s

Great! Our pod is running. If you see something that looks like an error, the image reference is probably incorrect. You can delete the pod with kubectl delete pod demo --namespace development and try again.

Let’s get some detailed information on this pod:

  kubectl describe pod demo --namespace development

The kubectl describe command is an important debugging tool. The information is not immediately useful for this tutorial. However, it’s a good inclusion because you may need it to diagnose problems.

We should be able to find out the current counter. Let’s check the poller container’s logs:

  kubectl logs demo -c poller --namespace development

Congratulations — you’ve just created your first microservices application on Kubernetes. There are a few drawbacks though. First, this setup is not scalable. Remember that all containers in a pod scale together? Our pod has four containers: the redis server, API server, counter, and poller. Scaling this setup horizontally wouldn’t work: there would be N separate data stores, each with its own counter. We can solve this problem by splitting the large pod into smaller pods and connecting them with Kubernetes services.

Let’s delete the pod we created before moving on:

  kubectl delete pod demo --namespace development

Production Services and Deployments

Let’s break the application up into smaller components. We’ll create one pod with the redis container. We’ll have one deployment for the server container, for horizontal scaling. Then, we’ll have one optional deployment for the poller and counter containers. These files are in the k8s folder and annotated with comments. Here’s the rundown:

  • k8s/production-redis-pod.yml – Pod to run the redis container,
  • k8s/production-redis-service.yml – Service to expose the redis pod to other pods (the server),
  • k8s/production-server-service.yml – Service to expose the server pod to other pods (the poller and counter),
  • k8s/production-server-deployment.yml – Deployment for the server container, and
  • k8s/production-support-deployment.yml – Deployment for the counter and poller containers.

This setup is deployed by:

  1. Creating the services,
  2. Creating the redis pod,
  3. Creating the server deployment, and
  4. Creating the support deployment.

The script/bootstrap script contains all the commands to do so. It takes the target namespace as an argument. Let’s deploy everything to the development namespace:

  script/bootstrap development

Note that the various YAML files may set the namespace in their metadata. It’s preferable not to do that, so the same resources may be reused in multiple namespaces, like we’ve done here.

Now, check the pods:

  kubectl get pods --namespace development

You’ll see that they have been created:

  NAME                      READY     STATUS    RESTARTS   AGE
  redis                     1/1       Running   0          41s
  server-506448125-0lq9m    1/1       Running   0          32s
  server-506448125-a4u1g    1/1       Running   0          32s
  server-506448125-m7srr    1/1       Running   0          32s
  support-592105180-9pl4j   2/2       Running   0          21s
  support-592105180-blnse   2/2       Running   0          21s
  support-592105180-xcqbn   2/2       Running   0          21s

Note that there are three server pods and three support pods. Kubernetes has scaled our application without a problem. Let’s check the logs for the poller container in one of the support pods. Pick one of the support pods from the previous output:

  kubectl logs support-592105180-xcqbn -c poller --namespace development

If everything is working, you should see some counter lines. Now, let’s scale up our deployment. Open up k8s/production-support-deployment.yml and increase the replicas value. We’ll tell Kubernetes to apply the changes. Kubernetes will take care of the rest:

  kubectl apply -f k8s/production-support-deployment.yml --namespace development

The apply command is similar to create, except that it can create resources and update them with changes. You may use apply like create if the resource does not exist.

Check the pods again:

  kubectl get pods --namespace development

Notice the number of support pods has changed to match the new replicas value. We can use this same approach — change a file, then apply it — to set up continuous deployment.

Continuous Deployment with Semaphore CI

We’ve walked through the initial deploy process and modified a running system manually. Now it’s time to automate it. The high-level process looks like this:

  1. Install kubectl, gcloud, docker-compose, and docker,
  2. Authenticate the build with gcloud,
  3. Authenticate the build with kubectl,
  4. Authenticate the build with GCR,
  5. Push image to GCR, and
  6. Create/update the Kubernetes resources.

Start by signing up on Semaphore if you don’t already have an account, and then create a GCP service account for Semaphore CI. Open up IAM in the GCP console. Then click “Service Accounts”. Make sure the correct project is selected. Create a new service account with the “Owner” role and check “Furnish a new private key”. Press “create” and the authentication file will download to your machine. Refer to the GCP service account docs for more info.

Next, create a new project in Semaphore CI. Configure the Docker platform. Then, upload the service account key you downloaded earlier as a new configuration file. The tutorial assumes the name is auth.json. Note that the UI shows the absolute path for this file. Use that value to set the GOOGLE_APPLICATION_CREDENTIALS environment variable. Do not use the ~ form. Use the full path, e.g. /home/runner/auth.json. Replace auth.json with the name of your file.
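
For example, assuming the uploaded file lands at /home/runner/auth.json (your path will differ; use the one shown in the Semaphore UI):

```shell
# Use the absolute path shown in the Semaphore UI for your configuration
# file; do not use the ~ shorthand.
export GOOGLE_APPLICATION_CREDENTIALS=/home/runner/auth.json
echo "$GOOGLE_APPLICATION_CREDENTIALS"
```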

Now, create a script to configure the build environment. Refer to script/ci/setup for the complete example. Complete the TODO items. Add this step to your SemaphoreCI project.

Next, write a deploy script. The deploy script handles the last two points in the process. Refer to script/ci/deploy for the complete example. It is almost the same as the earlier bootstrap script. kubectl apply is used because it will create or update resources accordingly. Thus, we can change the configuration by committing changes to the files in k8s/ and deploying. Add the script/ci/deploy command to your project.

Finally, push a build. You should see it go all the way through the pipeline and deploy everything to your Kubernetes cluster. Congratulations! Let’s check the production pods:

  kubectl get pods --namespace production

You can now try making changes to the configuration files to scale out the application, then commit and redeploy.

Wrap Up

We’ve covered a lot of ground — a Kubernetes crash course, the different types of resources, and how to model a microservices application. We created a production-ready Kubernetes cluster via Google Container Engine, and hooked up Semaphore CI for continuous deployment. There is still room for improvement. Here’s what you can investigate next:

  • The user guides. The pod, service, and deployment guides will be the most helpful for beginners.
  • Stateful containers. Ideally, we would not run Redis in this way. We are not using a shared volume or any of the other stateful Kubernetes features. If the Redis pod is killed, then the counter value is lost. This setup is good enough for the tutorial, but not good enough for production.
  • Image tagging. Our image does not use Docker tags. This is also not good enough for a real-world system. The tutorial skipped them because they added complexity without contributing much to the end goal. You should try creating a build process that uses the git commit SHA as the image tag. This way, each “deploy” uses a unique image. Then, you can change the ImagePullPolicy to IfNotPresent instead of Always.
  • Public internet access. The tutorial does not expose the server deployment to the public internet. This is a trivial thing to do with Kubernetes. Check the service docs for the LoadBalancer type. Try updating the service file and deploying.
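
As a starting point for the image-tagging idea above, a build script might derive the tag from the current commit (the registry path and image name below are placeholders for your own):

```shell
# Tag images with the current git commit SHA so each deploy uses a
# unique image. Falls back to "dev" when run outside a git checkout.
TAG="$(git rev-parse --short HEAD 2>/dev/null || echo dev)"
IMAGE="eu.gcr.io/my-project/server:${TAG}"
echo "Building ${IMAGE}"
# docker build -t "${IMAGE}" server/
# docker push "${IMAGE}"   # after authorizing with gcloud, as earlier
```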

Good luck out there, and happy shipping!
