I have been a Software Engineer for 10 years, implementing multi-tenant infrastructure for numerous brands. I’ve been using Docker and Kubernetes in production environments for 5 years, and for the past 2 years have been using them to run my home network at a very tiny scale. In my free time I try to maintain my home infrastructure, which ranges from adding a random Raspberry Pi to explaining to my kids why the lamp keeps flickering.
We will walk through the process of moving from a working application on your local machine to a running Kubernetes instance, leveraging Docker and Google Cloud. We start by “Docker-izing” an application, then publish the Docker image to a registry where it can be pulled by our Kubernetes cluster.
- Create a production-ready Docker image from an existing application
- Deploy the Docker image to a Docker registry
- Create and configure a Kubernetes cluster
- Deploy the Docker image onto the newly created Kubernetes cluster
- Demonstrate how to connect two Kubernetes services to one another by deploying a database within our Kubernetes cluster
We will work from one of three pre-created applications. Each application pairs a different language and database that can be run within a Kubernetes environment, and each has three pieces: the application, a database, and an Nginx proxy. Depending on the interest of the group, we will choose one of the pre-built options and begin configuring it to run within the Kubernetes environment.
After a general overview of Docker and Kubernetes, we will start creating the Docker assets that we will later run on our Kubernetes cluster. Based on the language preference of the group, we will start with one of the applications and demonstrate the basics of writing a Docker configuration to build and run it. Once the application runs locally as a Docker container, we will push the Docker image to a publicly accessible Docker registry.
(Note: the published Docker image makes a good catch-up point for anyone who’s fallen behind and is looking for a “reset” button.)
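As a concrete starting point, here is a minimal sketch of what that Docker configuration might look like, assuming a Node.js application; the actual file will depend on which option the group picks, and the names and ports are illustrative:

```dockerfile
# Minimal sketch, assuming a Node.js app; names and ports are illustrative.
FROM node:10-alpine

WORKDIR /app

# Install dependencies first so Docker can cache this layer between builds
COPY package*.json ./
RUN npm install --production

# Copy the application source and expose the port the app listens on
COPY . .
EXPOSE 3000

CMD ["node", "server.js"]
```

With the image building locally, publishing it is a matter of tagging it with the registry’s name and pushing (the user and image names below are placeholders):

```bash
docker build -t my-app .
docker tag my-app docker.io/myuser/my-app:1.0
docker push docker.io/myuser/my-app:1.0
```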
We will cover the different options for hosting a Kubernetes cluster, both for local development and for production services. On our development machines, we will use Docker for Mac, Docker for Windows, or kubeadm on Linux to build and run our applications. Once we’re ready for production, we will discuss some of the popular options for running Kubernetes at scale.
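As a rough sketch of both paths, the commands below verify a local cluster and create a managed one on Google Cloud (the cluster name, zone, and node count are placeholders):

```bash
# Verify the local cluster bundled with Docker for Mac/Windows (or built with kubeadm)
kubectl cluster-info
kubectl get nodes

# One popular production option: a managed GKE cluster on Google Cloud
gcloud container clusters create demo-cluster --zone us-central1-a --num-nodes 3
gcloud container clusters get-credentials demo-cluster --zone us-central1-a
```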
Now that we have a Kubernetes cluster, we will use the Docker assets developed in the first segment to run all of the services our application requires. This includes persisting data from the Kubernetes services to a location on our local disk, configuring DNS, and exposing firewall ports so we can properly route traffic through our Nginx instance. Once we have a fully functional Kubernetes service running on our local machines, we will discuss how the same configuration files can be used to deploy to a cloud-based Kubernetes environment.
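To give a feel for the configuration files involved, here is a sketch of the kind of manifests we will write: a Deployment that pulls the image we pushed earlier and persists data to the local disk via a hostPath volume, plus a Service that routes traffic to it. All names, paths, and ports are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: docker.io/myuser/my-app:1.0
          ports:
            - containerPort: 3000
          volumeMounts:
            - name: data
              mountPath: /var/lib/my-app
      volumes:
        # Persist service data to a directory on the node's local disk
        - name: data
          hostPath:
            path: /tmp/my-app-data
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 3000
```

Applying both with `kubectl apply -f` works unchanged against a local cluster or a cloud-hosted one, which is what makes reusing the same files in production practical.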
Depending on how much time remains, we can cover the remaining two applications and discuss some of the other tools for interacting with a Kubernetes cluster: Helm, Heptio, and Istio.
Finally, we will describe and/or implement a Continuous Integration pipeline that builds, tests, and deploys to our Kubernetes cluster, triggered by a GitHub hook.
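One possible shape for that pipeline, sketched here as a GitHub Actions workflow (any CI system that fires on a GitHub hook would follow the same build/test/deploy steps; the image name, secrets, and cluster details are placeholders):

```yaml
name: build-test-deploy
on:
  push:
    branches: [master]

jobs:
  pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Build the image and run the test suite inside it
      - name: Build and test
        run: |
          docker build -t myuser/my-app:${{ github.sha }} .
          docker run --rm myuser/my-app:${{ github.sha }} npm test

      # Push the tested image to the registry (credentials come from repo secrets)
      - name: Push image
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u myuser --password-stdin
          docker push myuser/my-app:${{ github.sha }}

      # Point the running Deployment at the new image
      # (assumes gcloud auth is already configured for the runner)
      - name: Deploy to the cluster
        run: |
          gcloud container clusters get-credentials demo-cluster --zone us-central1-a
          kubectl set image deployment/my-app my-app=myuser/my-app:${{ github.sha }}
```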