
Building and deploying a Docker image to a Kubernetes cluster

Written by Keilan Jackson | Sep 10, 2020 5:55:33 PM

Deploying Docker images to Kubernetes is a great way to run your application in an easily scalable way. Getting started with your first Kubernetes deployment can be a little daunting if you are new to Docker and Kubernetes, but with a little bit of preparation, your application will be running in no time. In this blog post, we will cover the basic steps needed to build Docker images and deploy them to a Kubernetes cluster. The topics we will cover are:

- Building Docker images for Kubernetes
- Where to store Docker images
- Deploying Docker images with kubectl
- Managing Kubernetes configuration files
- Creating build systems

Building Docker images for Kubernetes

The first step to deploying your application to Kubernetes is to build your Docker images. In this guide, I will assume you have already created Docker images for your application in development, and we will focus on tagging and storing production-ready images in an image repository.

Start by running docker image build. We pass . as the only argument to specify that the build should use the current directory as its context. This command looks for a Dockerfile in your current directory and attempts to build a Docker image as described in that Dockerfile.

docker image build .

If your Dockerfile takes arguments such as ARG app_name, you can pass those arguments into the build command:

docker image build --build-arg "app_name=MyApp" .

You may run into a situation where you want to build your app from a different directory than the current one. This is especially useful if you are managing multiple Dockerfiles in separate directories for different applications that share some common files, and it can help you write build scripts for more complex builds. Use the -f flag to specify which Dockerfile to build with:

docker image build -f "MyApp/Dockerfile" .

When using this method, be mindful that the paths referenced in your Dockerfile are relative to the directory passed as the final argument, not the directory the Dockerfile is located in. So in this example, we build the Dockerfile located at MyApp/Dockerfile, but all paths referenced in that Dockerfile for COPY and other operations are relative to the current working directory, not MyApp.
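
For example, here is a minimal sketch with a hypothetical shared/ directory that sits in the build context, outside the Dockerfile's own directory:

# Hypothetical layout:
#   ./shared/config.json
#   ./MyApp/Dockerfile
#
# Inside MyApp/Dockerfile, COPY resolves against the build context
# (the current working directory), so this line works even though
# shared/ is not inside MyApp/:
#   COPY shared/config.json /app/config.json
docker image build -f "MyApp/Dockerfile" .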

After your Docker image has been built, you will need to tag it. Tagging is very important in a Docker build and release pipeline since it is the only way to differentiate versions of your application. It is common practice to tag your newest images with the latest tag, but this is insufficient for deploying to Kubernetes, since you have to change the tag in your Kubernetes configuration to signal that a new image should be run. Because of this, I recommend tagging your images with the git commit hash of the current commit. This way you can tie your Docker images back to version control to see what has actually been deployed, and you have a unique identifier for each build.

To get the current commit hash programmatically, run:

git rev-parse --verify HEAD

You can then tag your image like so (Docker requires repository names to be lowercase, so the image is tagged as myapp rather than MyApp):

docker image tag $IMAGE myapp:$COMMIT

Tagging your image after it is built can be useful for fixing up old images, but you can and should tag it as part of the build command using the -t argument. Putting everything together, you could write a simple bash script to build and tag your image:

#!/bin/bash
COMMIT=$(git rev-parse --verify HEAD)
docker image build -f "MyApp/Dockerfile" \
  --build-arg "app_name=MyApp" \
  -t "myapp:latest" \
  -t "myapp:${COMMIT}" \
  .



Where to store Docker images

Now that you have your Docker images built and tagged, you need to store them somewhere besides on your laptop. Your Kubernetes cluster needs a fast and reliable Docker repository from which to pull your images, and there are many options for this.

One of the most popular Docker image repositories is Docker Hub. For open-source projects and public repositories, Docker Hub is completely free, and it has very reasonable pricing for private repositories.

To push images to Docker Hub, you must tag your images with the name of the Docker Hub repository you created, and then push each tag. Here is an example of tagging and pushing the latest image built above:

docker image tag myapp:latest myrepo/myapp:latest
docker login
docker push myrepo/myapp:latest

For anyone already using Amazon Web Services, Amazon Elastic Container Registry (ECR) provides cheap, private Docker repositories. You can similarly tag and push Docker images to your ECR repository if you have the AWS CLI installed. Just replace ECR_URL in the following example with the actual URL for your ECR repository, which can be viewed in the AWS Web Console.

docker image tag myapp:latest ECR_URL/myapp:latest
aws ecr get-login-password | docker login --username AWS --password-stdin ECR_URL
docker push ECR_URL/myapp:latest


Deploying Docker images with kubectl

Now that you have built and pushed your Docker images, you can deploy them to your Kubernetes cluster. The quickest way to get started is with kubectl. You can create a Deployment in your cluster by following the Kubernetes documentation. Once you have configured your Deployment, most updates to your app will only require changing the image attribute on your containers.
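
As a minimal sketch, a Deployment for the image pushed above might look like the following, saved as the my_app_deployment.yaml file referenced later in this post. The my-app name, replica count, and example image tag are assumptions for illustration:

cat <<'EOF' > my_app_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        # the image attribute: the only field you change on a typical release
        image: myrepo/myapp:0123abc
EOF
kubectl apply -f my_app_deployment.yaml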

To update an existing Deployment, you can use the kubectl edit command. Simply update the image attribute for your containers and save the Deployment. The Deployment will automatically create new pods with the new image you specified and terminate pods using the old image in a controlled fashion. For an in-depth look at how Deployments perform rolling updates, check out our blog post.
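
Assuming the Deployment and container are both named my-app, as in the sketch above, an update looks something like this (the new tag is a hypothetical commit hash):

# open the Deployment in your editor and change the image attribute
kubectl edit deployment/my-app

# or change the image non-interactively, e.g. from a deploy script
kubectl set image deployment/my-app my-app=myrepo/myapp:4567def

# watch the rolling update complete
kubectl rollout status deployment/my-app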

Your Kubernetes cluster must have access to the image repository. For public repositories, this should not be a problem. For private Docker Hub repos, you can follow the official Kubernetes guide to create a Kubernetes secret that allows your pods to pull images from the private repo.
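
As a rough sketch, you create the secret with kubectl create secret docker-registry and then reference it from the pod template; the regcred name and the credential placeholders here are ours:

kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=YOUR_USERNAME \
  --docker-password=YOUR_PASSWORD

# then reference the secret from the pod template in your Deployment:
#   spec:
#     imagePullSecrets:
#     - name: regcred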

If you are using Amazon ECR as a private repo and also running your Kubernetes nodes on EC2, you can use AWS IAM to give your nodes read access to the repository. If your Kubernetes nodes are not running on EC2, I would not recommend using ECR at all, since AWS does not provide an easy way to get access outside of IAM. You can attempt solutions that automatically refresh ECR credentials, such as this one, but the complexity may not be worth it.
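
For nodes on EC2, one way to grant that access is to attach the AWS-managed read-only ECR policy to the IAM role your nodes run under; the role name here is hypothetical:

aws iam attach-role-policy \
  --role-name my-k8s-node-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly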

 

Managing Kubernetes configuration files

Managing your Kubernetes deployments with kubectl edit should only be a short-term solution. You will probably want a more robust system that keeps track of which versions of your application were deployed at different times, and that allows you to easily replicate your environment. If you have made a bunch of ad-hoc changes to your Kubernetes resources, you will have a hard time recreating them in the event of a cluster failure.

One of the simplest ways to accomplish this is to store your Kubernetes configuration files in a git repository, and make those files the source of truth. When you update your application, update the configuration in your git repo and then use kubectl to sync the changes like so:

kubectl apply -f my_app_deployment.yaml

Then you can commit the changes you’ve made, giving you a history of every change in your Kubernetes cluster and an easy way to recreate your entire setup if needed.
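
A typical release then looks something like this sketch, reusing the COMMIT variable from the build script above:

# update the image tag in the manifest, review, apply, and record it
kubectl diff -f my_app_deployment.yaml
kubectl apply -f my_app_deployment.yaml
git add my_app_deployment.yaml
git commit -m "Deploy myapp at ${COMMIT}"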

Another commonly used tool for managing Kubernetes is Helm. Helm acts as both a package manager for community-maintained Kubernetes resources and a templating tool you can use for your own. Many Helm users like the ability to template Kubernetes resources, which speeds up deploying apps across multiple environments such as dev, QA, and production. Common criticisms are the added complexity and, in Helm 2, the tiller process that must run in the cluster to keep configuration in sync (Helm 3 removed tiller). In any case, using either of these methods is better than having no control over, or history of, your Kubernetes configuration changes.
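
As a brief sketch of the Helm 3 workflow, using the chart scaffold that helm create generates (myapp is a hypothetical chart name, and image.tag is a value in the default scaffold):

# scaffold a chart, then install and upgrade releases from it
helm create myapp
helm install myapp ./myapp --set image.tag="${COMMIT}"
helm upgrade myapp ./myapp --set image.tag="${COMMIT}"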

 

Creating build systems

Investing in a build system is a task that should be avoided until building and deploying becomes a major bottleneck for your application development. Iron out the details with how you will build and deploy your application manually before diving into a complicated and automated build system. For fast teams that are building and deploying multiple times per day, a build system is a must. Remember that you can tackle building and deploying separately: it may make sense to automate your builds but deploy manually for some time, until your team is ready for automatic deployments.

Jenkins is one of the most popular open-source build tools, and it can be integrated with Docker and Kubernetes to automate your build. If you are already a Jenkins user, then this is the approach I would recommend. Other people have written detailed guides for integrating Jenkins into a Kubernetes pipeline, so I will defer to them.

If you are not familiar with Jenkins, take a look at the free offerings from Travis CI and CircleCI. Both of these SaaS products offer ways to build your Docker images automatically, and they have reasonable paid plans for when you are at a scale that requires them.

It can be convenient to run Jenkins or another open-source solution inside of your production Kubernetes environment, but I would advise against it. If your production cluster is having issues, you do not want to be in a position where you cannot build or deploy your application. Treat your build system as a first-class citizen, the same way you do your log management and alerting systems. At the very least, have a backup method for building and deploying that you test regularly in case your primary method is unavailable.

Monitoring

If you are interested in a fully managed solution for monitoring Kubernetes deployments and other resources, check out Blue Matador. Blue Matador automatically detects and classifies dozens of Kubernetes events and metrics, all with very little setup. We also have Linux, Windows, and AWS integrations to meet all of your monitoring needs.

Conclusion

We’ve gone over how to build and tag Docker images and push them to either public or private Docker repositories. We also covered how to update your Kubernetes Deployments to use new images. Lastly, we went over how to manage your Kubernetes configuration and briefly surveyed several build systems for when you are ready to automate your build and deploy process.