
Kubernetes

Introduction

  • Terminology:
    • “Kubernetes” (“K8s”) is the entire orchestration system
    • The kubectl (“cube control”) CLI is the main way to configure K8s and manage apps
    • The “control plane” is a set of “master” containers that manage the cluster
      • API server
      • scheduler
      • controller manager
      • etcd (the cluster’s key-value database, where all cluster state is stored)
      • CoreDNS
      • possible to add more containers for storage, etc
    • A “cluster” is made up of nodes and pods
    • A “node” is a single server in a K8s cluster
      • e.g. locally, Docker Desktop is a node
      • A “kubelet” is a K8s agent running on a node that communicates between the control plane and the container runtime
      • “kube-proxy” handles the network rules on each node that route Service traffic to pods
    • A “pod” controls one or more containers running together on a node
      • It’s the basic unit of deployment
      • Containers are always in and controlled by pods
      • A pod can have one or more containers
      • It’s not a concrete thing — it’s an abstract K8s concept
        • Even though Docker runs containers, with K8s you can only deploy “pods”
        • K8s then tells the container runtime (via the kubelet) to create the actual containers for you
        • Every type of resource that creates containers uses this pod concept
        • I create pods, K8s creates the containers
        • More specifically, I use YAML or kubectl to tell the control plane I want to create a pod; the kubelet sees a request for a pod on its node and determines how many containers are needed; the kubelet then tells Docker to create the necessary container
    • A “controller” creates and updates pods and other K8s objects
      • There are many types of controllers, e.g. Deployment, ReplicaSet, StatefulSet, DaemonSet, Job, CronJob, etc
      • By combining the pods with various controllers, you can create many different looking deployments
    • A “service” is a network endpoint you use to connect to a set of pods
      • A persistent endpoint in the cluster at a specific DNS name and port
    • A “namespace” is a filtered group of objects in a cluster (optional)
      • Not a security feature
      • Just a way to filter my kubectl views to only include what I want to see
      • e.g. kubectl defaults to the “default” namespace, which hides the system resources that live in kube-system
    • Q: A “resource” is…
      • Resources use the Labels and Selectors keys to find their child resources
        • e.g. Labels: app=my-app
        • A resource’s Selector lists the Labels it needs to match in child resources (i.e. a child resource’s Labels are how its parent finds it); see the sketch at the end of this terminology list
    • There’s also secrets, ConfigMaps, etc
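    • A minimal sketch of the Labels/Selector relationship (all names below are placeholders): the Deployment’s selector must match the labels on the pod template it manages
      ```yaml
      # Hypothetical Deployment: the selector tells it which pod labels to look for
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-app
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: my-app      # Selector: the labels this Deployment manages
        template:
          metadata:
            labels:
              app: my-app    # Labels on the child pods; must match the selector above
          spec:
            containers:
              - name: my-app
                image: nginx # example image, a stand-in only
      ```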
  • Installing locally:
    • many ways to install, each of which includes different containers
    • easiest way is to enable it in Docker Desktop
    • Play with Kubernetes - a free online K8s playground
  • Docker Mastery: The Complete Toolset From a Docker Captain • Includes an extensive introduction to K8s after introducing Docker, Compose and Swarm • Bret Fisher 🧑‍🎓
  • Kubernetes 101, part I, the fundamentals • Leandro Proença 📖
  • Inbox:

General

Pods

Deployments & ReplicaSets

  • A Deployment is the most common K8s resource type for most purposes
    • other controller types like CronJob have very specialized uses, but most apps will end up using a Deployment controller
  • A Deployment contains one or more ReplicaSets, which contain one or more Pods
    • The Deployment controls the ReplicaSet
    • The ReplicaSet controls the pods
  • Two ways to deploy pods = kubectl create deploy or YAML
  • Creating a deployment with kubectl:
    • kubectl create deploy <name> --image <image name>
      • deploy can be rephrased as deployment or deployments
      • the command asks the control plane for a new deployment for your changes
      • the control plane then creates a new ReplicaSet for those changes (alongside the existing ReplicaSet, if you’re updating an existing deployment)
      • the kubelet creates the new ReplicaSet’s pods
      • the old ReplicaSet remains in place (scaled down) while the new ReplicaSet comes online, in case something is wrong with the new deployment
      • see kubectl create deploy --help for all options
    • kubectl get all = after creating a deployment, check what resources now exist (one or more pods, the deployment and its ReplicaSet, plus the default kubernetes service that’s always present in the namespace); see the example session below
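    • Example session (a sketch; “web” and the nginx image are stand-ins):
      ```sh
      # Ask the control plane for a new deployment running one pod
      kubectl create deploy web --image nginx

      # See what now exists: the pod, the deployment and its ReplicaSet,
      # plus the default "kubernetes" ClusterIP service that is always in the namespace
      kubectl get all
      ```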
  • Scaling ReplicaSets:
    • kubectl scale deploy/<name> --replicas <number> = change the number of desired total pods in the deployment
      • deployment can be rephrased as deployments or deploy
      • deploy <name> can also be written deploy/<name>
      • does not create a new deployment or a new ReplicaSet; it just updates the existing deployment record with the new desired number of pods
      • the scheduler sees that new pods have been requested and assigns a node to each one
      • the kubelet sees the new pods and tells Docker to start the containers (see the example below)
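    • Example (a sketch; “web” is a stand-in deployment name):
      ```sh
      # In a second terminal, watch pods appear as the deployment scales
      kubectl get pods --watch

      # Raise the desired pod count; the scheduler assigns a node to each new pod
      kubectl scale deploy/web --replicas 3
      ```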
  • Self-healing:
  • Deployments vs ReplicaSets:
    • Compared to creating a ReplicaSet directly, a deployment adds the ability to easily scale the number of replicas and rollout/rollback changes with no downtime
    • kubectl rollout status deploy/<name> shows real time updates during a rollout
    • kubectl rollout undo deploy/<name> is a faster way to roll back a change that caused issues than git revert, since it isn’t slowed down by merge conflicts or CI pipelines; it’s a good first step, with the git revert as a follow-up
    • kubectl describe deploy/<name> | grep revision to get the revision number of the current deployment (to compare it to the available revisions listed by kubectl rollout history deploy/<name> and roll back to one of them with kubectl rollout undo deploy/<name> --to-revision=<number>); see the example workflow below
    • Kubernetes 101, part IV, deployments • Leandro Proença 📖
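    • Example rollback workflow (a sketch; “web”, the container name and the image tags are stand-ins):
      ```sh
      # Trigger a rollout by changing the deployment, e.g. its image
      kubectl set image deploy/web nginx=nginx:1.27

      # Watch the rollout in real time
      kubectl rollout status deploy/web

      # If the new version misbehaves, check the current revision and the available history
      kubectl describe deploy/web | grep revision
      kubectl rollout history deploy/web

      # Roll back: either to the previous revision, or to a specific one
      kubectl rollout undo deploy/web
      kubectl rollout undo deploy/web --to-revision=1
      ```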
  • Inbox:

Services

  • A “service” is a stable IP address (and DNS name) for a set of pods
  • If we want to connect to pods (and their containers), we need a service
  • CoreDNS allows us to resolve services by name (i.e. a friendly hostname)
  • There are four available service types:
    • ClusterIP = the default; always available; only reachable from within the cluster (nodes and pods); used for one set of pods talking to another set of pods; pods reach the service on the app’s port number; doesn’t need any special firewall rules since it only works inside the cluster
      • Creating a ClusterIP service (by exposing a port):
        • kubectl get pods --watch = watch what happens with the following commands in a separate window
        • kubectl create deploy <name> --image <image name> = create deployment
        • kubectl scale deploy/<name> --replicas 5 = scale up to 5 pods
        • kubectl expose deploy/<name> --port 8888 = create a ClusterIP service (by default) with the same name as the deployment (by default)
        • kubectl get service = inspect running services
          • shows the type, cluster IP, external IP (if any), ports and age
      • cURL a ClusterIP service:
        • Remember, this service type isn’t accessible from outside the cluster, so we need to hop into the cluster by creating a pod and connecting to its shell (the full sequence is sketched below)
        • kubectl run tmp-shell --rm -it --image bretfisher/netshoot = create one pod with one container using the netshoot image (which includes terminal troubleshooting utilities), open a shell (-it), run the default CMD (zsh), and delete itself when I exit the shell (--rm)
        • curl <service name>:8888 = connect to the ClusterIP service by its name and port
          • Each time you run the command, there’s a chance the request will be handled by a different pod (indicated by the HOSTNAME in the cURL response)
          • Run the command a bunch of times to see the HOSTNAME change
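      • Putting it together (a sketch; “httpenv” is a stand-in name, and the bretfisher/httpenv image from the same course is assumed because it listens on port 8888):
        ```sh
        kubectl create deploy httpenv --image bretfisher/httpenv
        kubectl scale deploy/httpenv --replicas 5
        kubectl expose deploy/httpenv --port 8888   # ClusterIP service named "httpenv"
        kubectl get service httpenv

        # ClusterIP services are only reachable from inside the cluster,
        # so start a throwaway pod and curl from there
        kubectl run tmp-shell --rm -it --image bretfisher/netshoot
        # ...then, inside the tmp-shell pod:
        curl httpenv:8888                           # repeat to watch HOSTNAME change across pods
        ```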
    • NodePort = always available; different from ClusterIP in that it’s designed to allow anything outside the cluster to talk to the cluster through a high port number that’s allocated on each node; that port is open on every node’s IP; anyone can connect through it; other pods need to be updated to use this port; automatically creates a ClusterIP service to use inside the cluster (if one doesn’t already exist)
      • Creating a NodePort service (by exposing a port of type NodePort):
        • kubectl get pods --watch = watch what happens with the following commands in a separate window
        • kubectl create deploy <name> --image <image name> = create deployment
        • kubectl scale deploy/<name> --replicas 5 = scale up to 5 pods
        • kubectl expose deploy/<name> --port 8888 --name <service-name> --type NodePort = create a NodePort service with a custom name
        • kubectl get service = inspect running services
          • shows the type, cluster IP, external IP (if any), ports (node port is to the right of the :) and age
      • cURL a NodePort service:
        • This service is accessible externally (e.g. from my laptop environment), so I can curl via the IP of the host (localhost if working locally) and the node port shown in kubectl get services
        • curl localhost:<node port> = connect to the node via its exposed port (always a high number)
    • LoadBalancer = mostly used in the cloud; controls an external load balancer endpoint through the command line; creates a ClusterIP and node port, then uses a third-party load balancer solution to automatically set up load balancing; the load balancer forwards requests from its exposed port to the node port; only for traffic coming into the cluster from outside; requires an external provider that K8s can talk to remotely; automatically creates a ClusterIP service to use inside the cluster (if one doesn’t already exist) and a NodePort service to expose the cluster externally (if one doesn’t already exist)
      • kubectl expose deploy/<name> --port 8888 --name <service name> --type LoadBalancer = if using Docker Desktop, a load balancer resource is included (otherwise you need a cloud provider)
      • curl localhost:8888 (can also use the node port directly)
    • ExternalName = used less often; relates to things in the cluster talking to things outside the cluster; adds a CNAME DNS record to CoreDNS only to allow pods to reach an external service; not used for exposing pods, but for giving pods a DNS name to use for something outside K8s; useful when you don’t have control over the external DNS
  • Multiple services can be running at a time, but they need to have unique names
  • DNS for K8s services:
    • A DNS server is an optional add-on
    • CoreDNS is the default
    • DNS-based service discovery = when you create a service, you get a hostname to access the service (curl <hostname>)
      • But that only works for services in the same namespace
      • kubectl get namespaces
      • Since you can have resources with the same names in different namespaces, to talk to a service in another namespace, you need to use the FQDN instead:
        • curl <hostname>.<namespace>.svc.cluster.local
  • K8s Ingress:
    • Not one of the four Service types above, but a separate resource designed for routing HTTP traffic into the cluster
  • Exposing pods (and their containers) with services:
    • kubectl expose = create a service for existing pods (see the manifest sketch below)
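    • A minimal Service manifest sketch (names and ports are placeholders), assuming a deployment whose pods carry the label app: httpenv:
      ```yaml
      # Hypothetical ClusterIP Service: routes traffic on port 8888 to pods labelled app: httpenv
      apiVersion: v1
      kind: Service
      metadata:
        name: httpenv
      spec:
        type: ClusterIP        # or NodePort / LoadBalancer
        selector:
          app: httpenv         # the pods this service sends traffic to
        ports:
          - port: 8888         # port the service listens on
            targetPort: 8888   # port the container listens on
      ```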

YAML config

  • A declarative replacement for kubectl commands
  • Similar to how docker-compose.yaml can be a declarative replacement for docker run
  • Nice in that it can be easily version controlled and reused (see the sketch below)
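  • A minimal sketch of the declarative workflow (the file name is a placeholder):
    ```sh
    # Preview what would change before applying (optional)
    kubectl diff -f my-app.yaml

    # Create or update everything described in the file; K8s reconciles towards it
    kubectl apply -f my-app.yaml

    # The same file can be committed to git and re-applied to any cluster/context
    ```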

kubectl

  • kubectl is a command-line tool for running commands against Kubernetes clusters
  • In general, it offers less functionality than using YAML
  • Q: when to use kubectl (what is it better at)…?
  • Three main commands:
    • kubectl run = start a single pod (similar to how docker run creates a single container)
      • Unlikely to be used in production (where kubectl create deployment is the norm) unless testing or debugging a particular container
    • kubectl create = create resources via CLI or YAML
      • e.g. kubectl create deploy
    • kubectl apply = create/update something via YAML
  • General:
    • kubectl version = test your connection to the API
    • kubectl get all = list common resources in namespace (many types are hidden by default)
      • Always includes the default kubernetes service
  • Inspecting resources with kubectl get <resource>:
    • --help = see all options
    • -o wide = show more info columns
    • -o yaml = show all info in YAML format
    • Examples:
      • kubectl get all = list the common resources in the current namespace
      • kubectl get deploy/<name> = list deployment
      • kubectl get node/<name>
  • Inspecting resources with kubectl describe <resource>:
    • kubectl describe <resource type>/<name> = show a detailed, human-readable summary of that resource, including related events
    • A good first step when debugging, before diving into container or system logs
    • The main advantage over get is that it shows much more detail (for the raw YAML, use kubectl get <resource> -o yaml)
  • Watching running resources with --watch (-w):
    • e.g. kubectl get pods -w = start a long-running command to watch all pods in the namespace (e.g. will update if a pod is deleted)
    • kubectl get events -w = watch all events as they occur
  • Inspecting container logs with logs:
    • all K8s logs are container logs (that’s where the actually running processes are)
    • by default, container logs are stored in each node’s Docker runtime
    • generally, you’d want to centralize your container logs using a third-party solution that makes them searchable and usable for alerting and compiling metrics
    • Example commands:
      • kubectl logs <resource type>/<name> = show some container logs for that resource
        • kubectl logs deploy/<name> = gets logs from the first container of one pod (chosen arbitrarily) in the deployment
        • kubectl logs -l app=<name> = show logs from all pods in a deployment matching a given label
          • view available labels by running kubectl describe deploy/<name> and searching the output for “Labels:”
          • labels trickle down, so labels on a Deployment will also be present on its ReplicaSets and Pods (which may gain additional labels of their own)
        • kubectl logs pod/<name> --all-containers=true = show logs for all containers in a specific pod
        • kubectl logs pod/my-pod-xxx-yyy -c <container name> = show logs for a specific container in a specific pod
          • can get the container name by running kubectl describe pod/<name> and searching the output for “Created container” or “Started container”
      • kubectl logs <resource> --follow --tail 1 = show the last log line, then stream new logs as they arrive
        • useful when looking for any bad behaviour in any container (see the debugging sequence sketched below)
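      • A typical debugging sequence (a sketch; “web” and the app=web label are stand-ins):
        ```sh
        # Find the label the deployment stamps onto its pods
        kubectl describe deploy/web | grep -i labels

        # Tail logs across every pod carrying that label
        kubectl logs -l app=web --all-containers=true

        # Follow new log lines live, starting from the most recent one
        kubectl logs deploy/web --follow --tail 1
        ```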
  • Cleanup commands:
    • kubectl delete pod/<name> = delete a pod
    • kubectl delete deploy/<name> = delete a deployment
    • kubectl get all = see what resources still remain
  • Pods:
    • kubectl run <pod name> --image <image name> = deploy a pod
    • kubectl get pods = show list of running pods
  • Commands:
    • kubectl config view = view my local kubectl config
    • kubectl config current-context = see just the current context (i.e. which project I’m connected to locally)
    • kubectl config use-context <context name> = switch to a different context
    • kubectl config set-context ... = edit the properties of a context
    • kubectl get pods --all-namespaces (or -A) = list all pods in all namespaces of a context; useful when you aren’t sure which namespace you need; rerun with -n <namespace> once you know
    • kubectl get namespaces = list all namespaces in a context (see the example session below)
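    • Example session (a sketch; context and namespace names will vary):
      ```sh
      kubectl config current-context            # which cluster/project am I pointed at?
      kubectl config use-context docker-desktop # switch to another context (name is an example)

      kubectl get namespaces                    # what namespaces exist in this context?
      kubectl get pods -A                       # list pods across all namespaces
      kubectl get pods -n kube-system           # narrow to one namespace once you know it
      ```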
  • Inbox:

K8s Management

Lens

k9s

  • Tool for inspecting and interacting with Kubernetes clusters from the command line
  • Commands:
    • hjkl = move cursor around
    • enter = move in (view details)
    • esc = move out (go back up a level)
    • ? = show all other currently-available commands
  • K9s • K9s docs 📚

Argo

  • Argo CD • Argo CD docs 📚
  • How to kick off a cron job in the Argo UI:
    • go to the deployment and make sure the manifest has been updated (i.e. synced)
    • find the cron job’s container’s rectangle, open its ”…” menu, and choose “create job”
    • validate by watching the job run in Grafana’s Explore view with {namespace="abc", container="cron job container"}

Inbox